00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1037 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3699 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.056 The recommended git tool is: git 00:00:00.057 using credential 00000000-0000-0000-0000-000000000002 00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.091 Fetching changes from the remote Git repository 00:00:00.094 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.207 > git --version # 'git version 2.39.2' 00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.256 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.393 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.406 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.418 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.418 > git config core.sparsecheckout # timeout=10 00:00:05.429 > git read-tree -mu HEAD # timeout=10 00:00:05.446 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.469 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.469 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.591 [Pipeline] Start of Pipeline 00:00:05.606 [Pipeline] library 00:00:05.607 Loading library shm_lib@master 00:00:05.607 Library shm_lib@master is cached. Copying from home. 00:00:05.625 [Pipeline] node 00:00:05.639 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.641 [Pipeline] { 00:00:05.650 [Pipeline] catchError 00:00:05.652 [Pipeline] { 00:00:05.660 [Pipeline] wrap 00:00:05.667 [Pipeline] { 00:00:05.675 [Pipeline] stage 00:00:05.677 [Pipeline] { (Prologue) 00:00:05.727 [Pipeline] echo 00:00:05.729 Node: VM-host-SM38 00:00:05.735 [Pipeline] cleanWs 00:00:05.743 [WS-CLEANUP] Deleting project workspace... 00:00:05.743 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.748 [WS-CLEANUP] done 00:00:05.925 [Pipeline] setCustomBuildProperty 00:00:06.021 [Pipeline] httpRequest 00:00:06.390 [Pipeline] echo 00:00:06.391 Sorcerer 10.211.164.20 is alive 00:00:06.398 [Pipeline] retry 00:00:06.399 [Pipeline] { 00:00:06.410 [Pipeline] httpRequest 00:00:06.414 HttpMethod: GET 00:00:06.414 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.414 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.424 Response Code: HTTP/1.1 200 OK 00:00:06.424 Success: Status code 200 is in the accepted range: 200,404 00:00:06.425 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.531 [Pipeline] } 00:00:22.550 [Pipeline] // retry 00:00:22.559 [Pipeline] sh 00:00:22.846 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.865 [Pipeline] httpRequest 00:00:23.177 [Pipeline] echo 00:00:23.178 Sorcerer 10.211.164.20 is alive 00:00:23.186 [Pipeline] retry 00:00:23.188 [Pipeline] { 00:00:23.199 [Pipeline] httpRequest 00:00:23.204 HttpMethod: GET 00:00:23.204 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:23.204 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:00:23.215 Response Code: HTTP/1.1 200 OK 00:00:23.215 Success: Status code 200 is in the accepted range: 200,404 00:00:23.216 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:07:01.090 [Pipeline] } 00:07:01.110 [Pipeline] // retry 00:07:01.118 [Pipeline] sh 00:07:01.403 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz 00:07:04.708 [Pipeline] sh 00:07:04.986 + git -C spdk log --oneline -n5 00:07:04.986 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:07:04.986 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:07:04.986 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller 00:07:04.986 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:07:04.986 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device 00:07:05.007 [Pipeline] withCredentials 00:07:05.019 > git --version # timeout=10 00:07:05.032 > git --version # 'git version 2.39.2' 00:07:05.048 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:07:05.050 [Pipeline] { 00:07:05.059 [Pipeline] retry 00:07:05.061 [Pipeline] { 00:07:05.075 [Pipeline] sh 00:07:05.385 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:07:05.395 [Pipeline] } 00:07:05.413 [Pipeline] // retry 00:07:05.417 [Pipeline] } 00:07:05.436 [Pipeline] // withCredentials 00:07:05.445 [Pipeline] httpRequest 00:07:05.828 [Pipeline] echo 00:07:05.829 Sorcerer 10.211.164.20 is alive 00:07:05.838 [Pipeline] retry 00:07:05.839 [Pipeline] { 00:07:05.850 [Pipeline] httpRequest 00:07:05.854 HttpMethod: GET 00:07:05.855 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:07:05.855 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:07:05.860 Response Code: HTTP/1.1 200 OK 00:07:05.860 Success: Status code 200 is in the accepted range: 200,404 00:07:05.861 Saving response body to 
/var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:07:20.630 [Pipeline] } 00:07:20.649 [Pipeline] // retry 00:07:20.656 [Pipeline] sh 00:07:20.958 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:07:22.869 [Pipeline] sh 00:07:23.145 + git -C dpdk log --oneline -n5 00:07:23.145 eeb0605f11 version: 23.11.0 00:07:23.145 238778122a doc: update release notes for 23.11 00:07:23.145 46aa6b3cfc doc: fix description of RSS features 00:07:23.145 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:07:23.145 7e421ae345 devtools: support skipping forbid rule check 00:07:23.160 [Pipeline] writeFile 00:07:23.175 [Pipeline] sh 00:07:23.452 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:23.463 [Pipeline] sh 00:07:23.740 + cat autorun-spdk.conf 00:07:23.740 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:23.740 SPDK_TEST_NVME=1 00:07:23.740 SPDK_TEST_FTL=1 00:07:23.740 SPDK_TEST_ISAL=1 00:07:23.740 SPDK_RUN_ASAN=1 00:07:23.740 SPDK_RUN_UBSAN=1 00:07:23.740 SPDK_TEST_XNVME=1 00:07:23.740 SPDK_TEST_NVME_FDP=1 00:07:23.740 SPDK_TEST_NATIVE_DPDK=v23.11 00:07:23.740 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:07:23.740 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:23.745 RUN_NIGHTLY=1 00:07:23.748 [Pipeline] } 00:07:23.761 [Pipeline] // stage 00:07:23.774 [Pipeline] stage 00:07:23.776 [Pipeline] { (Run VM) 00:07:23.789 [Pipeline] sh 00:07:24.065 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:24.065 + echo 'Start stage prepare_nvme.sh' 00:07:24.065 Start stage prepare_nvme.sh 00:07:24.065 + [[ -n 1 ]] 00:07:24.065 + disk_prefix=ex1 00:07:24.065 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:07:24.065 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:07:24.065 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:07:24.065 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:24.065 ++ SPDK_TEST_NVME=1 00:07:24.065 ++ SPDK_TEST_FTL=1 00:07:24.065 ++ SPDK_TEST_ISAL=1 00:07:24.065 ++ SPDK_RUN_ASAN=1 00:07:24.065 ++ SPDK_RUN_UBSAN=1 00:07:24.065 ++ SPDK_TEST_XNVME=1 00:07:24.065 ++ SPDK_TEST_NVME_FDP=1 00:07:24.065 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:07:24.065 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:07:24.065 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:24.065 ++ RUN_NIGHTLY=1 00:07:24.065 + cd /var/jenkins/workspace/nvme-vg-autotest 00:07:24.065 + nvme_files=() 00:07:24.065 + declare -A nvme_files 00:07:24.065 + backend_dir=/var/lib/libvirt/images/backends 00:07:24.065 + nvme_files['nvme.img']=5G 00:07:24.065 + nvme_files['nvme-cmb.img']=5G 00:07:24.065 + nvme_files['nvme-multi0.img']=4G 00:07:24.065 + nvme_files['nvme-multi1.img']=4G 00:07:24.065 + nvme_files['nvme-multi2.img']=4G 00:07:24.065 + nvme_files['nvme-openstack.img']=8G 00:07:24.065 + nvme_files['nvme-zns.img']=5G 00:07:24.065 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:24.065 + (( SPDK_TEST_FTL == 1 )) 00:07:24.065 + nvme_files["nvme-ftl.img"]=6G 00:07:24.065 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:24.065 + nvme_files["nvme-fdp.img"]=1G 00:07:24.065 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:07:24.065 + for nvme in "${!nvme_files[@]}" 00:07:24.065 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:07:24.065 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:24.065 + for nvme in "${!nvme_files[@]}" 00:07:24.065 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:07:24.065 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:07:24.065 + for nvme in "${!nvme_files[@]}" 00:07:24.065 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:07:24.065 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:24.065 + for nvme in "${!nvme_files[@]}" 00:07:24.065 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:07:24.065 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:24.065 + for nvme in "${!nvme_files[@]}" 00:07:24.065 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:07:24.631 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:24.631 + for nvme in "${!nvme_files[@]}" 00:07:24.631 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:07:24.631 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:24.631 + for nvme in "${!nvme_files[@]}" 00:07:24.631 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:07:24.631 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:24.631 + for nvme in "${!nvme_files[@]}" 00:07:24.631 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:07:24.888 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:07:24.888 + for nvme in "${!nvme_files[@]}" 00:07:24.888 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:07:25.145 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:25.145 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:07:25.145 + echo 'End stage prepare_nvme.sh' 00:07:25.145 End stage prepare_nvme.sh 00:07:25.156 [Pipeline] sh 00:07:25.437 + DISTRO=fedora39 00:07:25.437 + CPUS=10 00:07:25.437 + RAM=12288 00:07:25.437 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:25.437 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:07:25.437 00:07:25.437 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:07:25.437 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:07:25.437 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:07:25.437 HELP=0 00:07:25.437 DRY_RUN=0 00:07:25.437 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:07:25.437 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:07:25.437 NVME_AUTO_CREATE=0 00:07:25.437 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:07:25.437 NVME_CMB=,,,, 00:07:25.437 NVME_PMR=,,,, 00:07:25.437 NVME_ZNS=,,,, 00:07:25.437 NVME_MS=true,,,, 00:07:25.437 NVME_FDP=,,,on, 00:07:25.437 SPDK_VAGRANT_DISTRO=fedora39 00:07:25.437 SPDK_VAGRANT_VMCPU=10 00:07:25.437 SPDK_VAGRANT_VMRAM=12288 00:07:25.437 SPDK_VAGRANT_PROVIDER=libvirt 00:07:25.437 SPDK_VAGRANT_HTTP_PROXY= 00:07:25.437 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:25.437 SPDK_OPENSTACK_NETWORK=0 00:07:25.437 VAGRANT_PACKAGE_BOX=0 00:07:25.437 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:07:25.437 FORCE_DISTRO=true 00:07:25.437 VAGRANT_BOX_VERSION= 00:07:25.437 EXTRA_VAGRANTFILES= 00:07:25.437 NIC_MODEL=e1000 00:07:25.437 00:07:25.437 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:07:25.437 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:07:28.023 Bringing machine 'default' up with 'libvirt' provider... 00:07:28.586 ==> default: Creating image (snapshot of base box volume). 00:07:28.843 ==> default: Creating domain with the following settings... 
00:07:28.843 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733402307_1c6245e58570b7df65f8 00:07:28.843 ==> default: -- Domain type: kvm 00:07:28.843 ==> default: -- Cpus: 10 00:07:28.843 ==> default: -- Feature: acpi 00:07:28.843 ==> default: -- Feature: apic 00:07:28.843 ==> default: -- Feature: pae 00:07:28.843 ==> default: -- Memory: 12288M 00:07:28.843 ==> default: -- Memory Backing: hugepages: 00:07:28.843 ==> default: -- Management MAC: 00:07:28.843 ==> default: -- Loader: 00:07:28.843 ==> default: -- Nvram: 00:07:28.843 ==> default: -- Base box: spdk/fedora39 00:07:28.843 ==> default: -- Storage pool: default 00:07:28.843 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733402307_1c6245e58570b7df65f8.img (20G) 00:07:28.843 ==> default: -- Volume Cache: default 00:07:28.843 ==> default: -- Kernel: 00:07:28.843 ==> default: -- Initrd: 00:07:28.843 ==> default: -- Graphics Type: vnc 00:07:28.843 ==> default: -- Graphics Port: -1 00:07:28.843 ==> default: -- Graphics IP: 127.0.0.1 00:07:28.843 ==> default: -- Graphics Password: Not defined 00:07:28.843 ==> default: -- Video Type: cirrus 00:07:28.843 ==> default: -- Video VRAM: 9216 00:07:28.843 ==> default: -- Sound Type: 00:07:28.843 ==> default: -- Keymap: en-us 00:07:28.843 ==> default: -- TPM Path: 00:07:28.843 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:28.843 ==> default: -- Command line args: 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:28.843 ==> default: -> value=-drive, 00:07:28.843 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:28.843 ==> default: -> value=-drive, 00:07:28.843 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:07:28.843 ==> default: -> value=-drive, 00:07:28.843 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:28.843 ==> default: -> value=-drive, 00:07:28.843 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:28.843 ==> default: -> value=-drive, 00:07:28.843 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:07:28.843 ==> default: -> value=-device, 00:07:28.843 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:07:28.843 ==> default: -> value=-drive, 00:07:28.844 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:07:28.844 ==> default: -> value=-device, 00:07:28.844 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:29.100 ==> default: Creating shared folders metadata... 00:07:29.100 ==> default: Starting domain. 00:07:31.631 ==> default: Waiting for domain to get an IP address... 00:07:49.733 ==> default: Waiting for SSH to become available... 00:07:49.733 ==> default: Configuring and enabling network interfaces... 00:07:52.267 default: SSH address: 192.168.121.42:22 00:07:52.267 default: SSH username: vagrant 00:07:52.267 default: SSH auth method: private key 00:07:54.166 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:02.363 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:08:07.636 ==> default: Mounting SSHFS shared folder... 00:08:09.008 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:08:09.008 ==> default: Checking Mount.. 00:08:09.965 ==> default: Folder Successfully Mounted! 00:08:09.965 00:08:09.965 SUCCESS! 00:08:09.965 00:08:09.965 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:08:09.966 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:09.966 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:08:09.966 00:08:09.973 [Pipeline] } 00:08:09.988 [Pipeline] // stage 00:08:09.998 [Pipeline] dir 00:08:09.999 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:08:10.001 [Pipeline] { 00:08:10.015 [Pipeline] catchError 00:08:10.016 [Pipeline] { 00:08:10.029 [Pipeline] sh 00:08:10.311 + vagrant ssh-config --host vagrant 00:08:10.311 + sed -ne '/^Host/,$p' 00:08:10.311 + tee ssh_conf 00:08:12.839 Host vagrant 00:08:12.839 HostName 192.168.121.42 00:08:12.839 User vagrant 00:08:12.839 Port 22 00:08:12.839 UserKnownHostsFile /dev/null 00:08:12.839 StrictHostKeyChecking no 00:08:12.839 PasswordAuthentication no 00:08:12.839 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:08:12.839 IdentitiesOnly yes 00:08:12.839 LogLevel FATAL 00:08:12.839 ForwardAgent yes 00:08:12.839 ForwardX11 yes 00:08:12.839 00:08:12.852 [Pipeline] withEnv 00:08:12.855 [Pipeline] { 00:08:12.898 [Pipeline] sh 00:08:13.174 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:08:13.174 source /etc/os-release 00:08:13.174 [[ -e /image.version ]] && img=$(< /image.version) 00:08:13.174 # Minimal, systemd-like check. 
00:08:13.174 if [[ -e /.dockerenv ]]; then 00:08:13.174 # Clear garbage from the node'\''s name: 00:08:13.174 # agt-er_autotest_547-896 -> autotest_547-896 00:08:13.174 # $HOSTNAME is the actual container id 00:08:13.174 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:13.174 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:08:13.174 # We can assume this is a mount from a host where container is running, 00:08:13.174 # so fetch its hostname to easily identify the target swarm worker. 00:08:13.174 container="$(< /etc/hostname) ($agent)" 00:08:13.174 else 00:08:13.174 # Fallback 00:08:13.174 container=$agent 00:08:13.174 fi 00:08:13.174 fi 00:08:13.174 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:13.174 ' 00:08:13.183 [Pipeline] } 00:08:13.200 [Pipeline] // withEnv 00:08:13.208 [Pipeline] setCustomBuildProperty 00:08:13.226 [Pipeline] stage 00:08:13.229 [Pipeline] { (Tests) 00:08:13.249 [Pipeline] sh 00:08:13.527 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:13.792 [Pipeline] sh 00:08:14.062 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:08:14.334 [Pipeline] timeout 00:08:14.334 Timeout set to expire in 50 min 00:08:14.336 [Pipeline] { 00:08:14.350 [Pipeline] sh 00:08:14.627 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:08:15.191 HEAD is now at 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:08:15.203 [Pipeline] sh 00:08:15.482 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:08:15.752 [Pipeline] sh 00:08:16.028 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:16.040 [Pipeline] sh 00:08:16.341 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:08:16.341 ++ readlink -f spdk_repo 00:08:16.341 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:16.341 + [[ -n /home/vagrant/spdk_repo ]] 00:08:16.341 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:16.341 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:16.341 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:16.341 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:16.341 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:16.341 + [[ nvme-vg-autotest == pkgdep-* ]] 00:08:16.341 + cd /home/vagrant/spdk_repo 00:08:16.341 + source /etc/os-release 00:08:16.341 ++ NAME='Fedora Linux' 00:08:16.341 ++ VERSION='39 (Cloud Edition)' 00:08:16.341 ++ ID=fedora 00:08:16.341 ++ VERSION_ID=39 00:08:16.341 ++ VERSION_CODENAME= 00:08:16.341 ++ PLATFORM_ID=platform:f39 00:08:16.342 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:08:16.342 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:16.342 ++ LOGO=fedora-logo-icon 00:08:16.342 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:08:16.342 ++ HOME_URL=https://fedoraproject.org/ 00:08:16.342 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:08:16.342 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:16.342 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:16.342 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:16.342 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:08:16.342 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:16.342 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:08:16.342 ++ SUPPORT_END=2024-11-12 00:08:16.342 ++ VARIANT='Cloud Edition' 00:08:16.342 ++ VARIANT_ID=cloud 00:08:16.342 + uname -a 00:08:16.342 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:08:16.342 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:16.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:17.168 Hugepages 00:08:17.168 node hugesize free / total 00:08:17.168 node0 1048576kB 0 / 0 00:08:17.168 node0 2048kB 0 / 0 00:08:17.168 00:08:17.168 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:17.168 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:17.168 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:17.168 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:17.168 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:08:17.168 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:08:17.168 + rm -f /tmp/spdk-ld-path 00:08:17.168 + source autorun-spdk.conf 00:08:17.168 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:17.168 ++ SPDK_TEST_NVME=1 00:08:17.168 ++ SPDK_TEST_FTL=1 00:08:17.168 ++ SPDK_TEST_ISAL=1 00:08:17.168 ++ SPDK_RUN_ASAN=1 00:08:17.168 ++ SPDK_RUN_UBSAN=1 00:08:17.168 ++ SPDK_TEST_XNVME=1 00:08:17.168 ++ SPDK_TEST_NVME_FDP=1 00:08:17.168 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:08:17.168 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:08:17.168 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:17.168 ++ RUN_NIGHTLY=1 00:08:17.168 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:17.168 + [[ -n '' ]] 00:08:17.168 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:17.168 + for M in /var/spdk/build-*-manifest.txt 00:08:17.168 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:08:17.168 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:17.168 + for M in /var/spdk/build-*-manifest.txt 00:08:17.168 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:17.168 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:17.168 + for M in /var/spdk/build-*-manifest.txt 00:08:17.169 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:17.169 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:17.169 ++ uname 00:08:17.169 + [[ Linux == 
\L\i\n\u\x ]] 00:08:17.169 + sudo dmesg -T 00:08:17.169 + sudo dmesg --clear 00:08:17.169 + dmesg_pid=5764 00:08:17.169 + [[ Fedora Linux == FreeBSD ]] 00:08:17.169 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.169 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.169 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:17.169 + [[ -x /usr/src/fio-static/fio ]] 00:08:17.169 + sudo dmesg -Tw 00:08:17.169 + export FIO_BIN=/usr/src/fio-static/fio 00:08:17.169 + FIO_BIN=/usr/src/fio-static/fio 00:08:17.169 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:17.169 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:17.169 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:17.169 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.169 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.169 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:17.169 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.169 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.169 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:17.427 12:39:17 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:08:17.427 12:39:17 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:17.427 12:39:17 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:08:17.427 12:39:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:08:17.427 12:39:17 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:17.427 12:39:17 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:08:17.427 12:39:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.427 12:39:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:08:17.427 12:39:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:17.427 12:39:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.427 12:39:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.427 12:39:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.427 12:39:17 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.427 12:39:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.427 12:39:17 -- paths/export.sh@5 -- $ export PATH 00:08:17.427 12:39:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.427 12:39:17 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:17.427 12:39:17 -- common/autobuild_common.sh@493 -- $ date +%s 00:08:17.427 12:39:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733402357.XXXXXX 00:08:17.427 12:39:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733402357.fRaQb8 00:08:17.427 12:39:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:08:17.427 12:39:17 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:08:17.427 12:39:17 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:08:17.427 12:39:17 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:08:17.427 12:39:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:17.427 12:39:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:17.427 12:39:17 -- common/autobuild_common.sh@509 -- $ get_config_params 00:08:17.427 12:39:17 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:08:17.427 12:39:17 -- common/autotest_common.sh@10 -- $ set +x 00:08:17.427 12:39:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:08:17.427 12:39:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:08:17.427 12:39:17 -- pm/common@17 -- $ local monitor 00:08:17.427 12:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:17.427 12:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:17.427 12:39:17 -- pm/common@25 -- $ 
sleep 1 00:08:17.427 12:39:17 -- pm/common@21 -- $ date +%s 00:08:17.427 12:39:17 -- pm/common@21 -- $ date +%s 00:08:17.427 12:39:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733402357 00:08:17.427 12:39:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733402357 00:08:17.427 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733402357_collect-cpu-load.pm.log 00:08:17.427 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733402357_collect-vmstat.pm.log 00:08:18.360 12:39:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:08:18.360 12:39:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:18.360 12:39:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:18.360 12:39:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:18.360 12:39:18 -- spdk/autobuild.sh@16 -- $ date -u 00:08:18.360 Thu Dec 5 12:39:18 PM UTC 2024 00:08:18.360 12:39:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:18.360 v25.01-pre-296-g8d3947977 00:08:18.360 12:39:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:08:18.360 12:39:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:08:18.360 12:39:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:08:18.360 12:39:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:08:18.360 12:39:18 -- common/autotest_common.sh@10 -- $ set +x 00:08:18.360 ************************************ 00:08:18.360 START TEST asan 00:08:18.360 ************************************ 00:08:18.360 using asan 00:08:18.360 12:39:18 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:08:18.360 00:08:18.360 real 0m0.000s 00:08:18.360 user 0m0.000s 00:08:18.360 sys 0m0.000s 00:08:18.360 ************************************ 00:08:18.360 END TEST asan 00:08:18.360 ************************************ 00:08:18.361 12:39:18 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:18.361 12:39:18 asan -- common/autotest_common.sh@10 -- $ set +x 00:08:18.619 12:39:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:18.619 12:39:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:18.619 12:39:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:08:18.619 12:39:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:08:18.619 12:39:18 -- common/autotest_common.sh@10 -- $ set +x 00:08:18.619 ************************************ 00:08:18.619 START TEST ubsan 00:08:18.619 ************************************ 00:08:18.619 using ubsan 00:08:18.619 12:39:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:08:18.619 00:08:18.619 real 0m0.000s 00:08:18.619 user 0m0.000s 00:08:18.619 sys 0m0.000s 00:08:18.619 12:39:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:18.619 ************************************ 00:08:18.619 END TEST ubsan 00:08:18.619 ************************************ 00:08:18.619 12:39:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:08:18.619 12:39:18 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:08:18.619 12:39:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:08:18.619 12:39:18 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:08:18.619 12:39:18 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:08:18.619 12:39:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:08:18.619 12:39:18 -- common/autotest_common.sh@10 -- $ set +x 00:08:18.619 ************************************ 00:08:18.619 START TEST build_native_dpdk 00:08:18.619 ************************************ 00:08:18.619 12:39:18 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:08:18.619 eeb0605f11 version: 23.11.0 00:08:18.619 238778122a doc: update release notes for 23.11 00:08:18.619 46aa6b3cfc doc: fix description of RSS features 00:08:18.619 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:08:18.619 7e421ae345 devtools: support skipping forbid rule check 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 
00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:08:18.619 patching file config/rte_config.h 00:08:18.619 Hunk #1 succeeded at 60 (offset 1 line). 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:08:18.619 patching file lib/pcapng/rte_pcapng.c 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:08:18.619 12:39:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:08:18.619 12:39:18 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:08:18.620 12:39:18 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:08:18.620 12:39:18 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:08:18.620 12:39:18 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:08:23.932 The Meson build system 00:08:23.932 Version: 1.5.0 00:08:23.932 Source dir: /home/vagrant/spdk_repo/dpdk 00:08:23.932 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:08:23.932 Build type: native build 00:08:23.932 Program cat found: YES (/usr/bin/cat) 00:08:23.932 Project name: DPDK 00:08:23.932 Project version: 23.11.0 00:08:23.932 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:08:23.932 C linker for the host machine: gcc ld.bfd 2.40-14 00:08:23.932 Host machine cpu family: x86_64 00:08:23.932 Host machine cpu: x86_64 00:08:23.932 Message: ## Building in Developer Mode ## 00:08:23.932 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:23.932 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:08:23.932 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:08:23.932 Program python3 found: YES (/usr/bin/python3) 00:08:23.932 Program cat found: YES (/usr/bin/cat) 00:08:23.932 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:08:23.932 Compiler for C supports arguments -march=native: YES 00:08:23.932 Checking for size of "void *" : 8 00:08:23.932 Checking for size of "void *" : 8 (cached) 00:08:23.932 Library m found: YES 00:08:23.932 Library numa found: YES 00:08:23.932 Has header "numaif.h" : YES 00:08:23.932 Library fdt found: NO 00:08:23.932 Library execinfo found: NO 00:08:23.932 Has header "execinfo.h" : YES 00:08:23.932 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:08:23.932 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:23.932 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:23.932 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:23.932 Run-time dependency openssl found: YES 3.1.1 00:08:23.932 Run-time dependency libpcap found: YES 1.10.4 00:08:23.932 Has header "pcap.h" with dependency libpcap: YES 00:08:23.932 Compiler for C supports arguments -Wcast-qual: YES 00:08:23.932 Compiler for C supports arguments -Wdeprecated: YES 00:08:23.932 Compiler for C supports arguments -Wformat: YES 00:08:23.932 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:23.932 Compiler for C supports arguments -Wformat-security: NO 00:08:23.932 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:23.932 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:23.932 Compiler for C supports arguments -Wnested-externs: YES 00:08:23.932 Compiler for C supports arguments -Wold-style-definition: YES 00:08:23.932 Compiler for C supports arguments -Wpointer-arith: YES 00:08:23.932 Compiler for C supports arguments -Wsign-compare: YES 00:08:23.932 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:23.932 Compiler for C supports arguments -Wundef: YES 00:08:23.932 Compiler for C supports arguments -Wwrite-strings: YES 00:08:23.932 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:23.932 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:23.932 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:23.932 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:23.932 Program objdump found: YES (/usr/bin/objdump) 00:08:23.932 Compiler for C supports arguments -mavx512f: YES 00:08:23.932 Checking if "AVX512 checking" compiles: YES 00:08:23.932 Fetching value of define "__SSE4_2__" : 1 00:08:23.932 Fetching value of define "__AES__" : 1 00:08:23.932 Fetching value of define "__AVX__" : 1 00:08:23.932 Fetching value of define "__AVX2__" : 1 00:08:23.932 Fetching value of define "__AVX512BW__" : 1 00:08:23.932 Fetching value of define "__AVX512CD__" : 1 00:08:23.932 Fetching value of define "__AVX512DQ__" : 1 00:08:23.932 Fetching value of define "__AVX512F__" : 1 00:08:23.932 Fetching value of define "__AVX512VL__" : 1 00:08:23.932 Fetching value of define "__PCLMUL__" : 1 00:08:23.932 Fetching value of define "__RDRND__" : 1 00:08:23.932 Fetching value of define "__RDSEED__" : 1 00:08:23.932 Fetching value of define "__VPCLMULQDQ__" : 1 00:08:23.932 Fetching value of define "__znver1__" : (undefined) 00:08:23.932 Fetching value of define "__znver2__" : (undefined) 00:08:23.932 Fetching value of define "__znver3__" : (undefined) 00:08:23.932 Fetching value of define "__znver4__" : (undefined) 00:08:23.932 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:23.932 Message: lib/log: Defining dependency "log" 00:08:23.932 Message: lib/kvargs: Defining dependency "kvargs" 00:08:23.932 Message: lib/telemetry: Defining dependency "telemetry" 
00:08:23.932 Checking for function "getentropy" : NO 00:08:23.932 Message: lib/eal: Defining dependency "eal" 00:08:23.932 Message: lib/ring: Defining dependency "ring" 00:08:23.932 Message: lib/rcu: Defining dependency "rcu" 00:08:23.932 Message: lib/mempool: Defining dependency "mempool" 00:08:23.932 Message: lib/mbuf: Defining dependency "mbuf" 00:08:23.932 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:23.932 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:08:23.932 Compiler for C supports arguments -mpclmul: YES 00:08:23.932 Compiler for C supports arguments -maes: YES 00:08:23.932 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:23.932 Compiler for C supports arguments -mavx512bw: YES 00:08:23.932 Compiler for C supports arguments -mavx512dq: YES 00:08:23.932 Compiler for C supports arguments -mavx512vl: YES 00:08:23.932 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:23.932 Compiler for C supports arguments -mavx2: YES 00:08:23.932 Compiler for C supports arguments -mavx: YES 00:08:23.932 Message: lib/net: Defining dependency "net" 00:08:23.932 Message: lib/meter: Defining dependency "meter" 00:08:23.932 Message: lib/ethdev: Defining dependency "ethdev" 00:08:23.932 Message: lib/pci: Defining dependency "pci" 00:08:23.932 Message: lib/cmdline: Defining dependency "cmdline" 00:08:23.932 Message: lib/metrics: Defining dependency "metrics" 00:08:23.932 Message: lib/hash: Defining dependency "hash" 00:08:23.932 Message: lib/timer: Defining dependency "timer" 00:08:23.932 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512CD__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:23.932 Message: lib/acl: Defining dependency "acl" 00:08:23.932 Message: lib/bbdev: Defining dependency "bbdev" 00:08:23.932 Message: lib/bitratestats: Defining dependency "bitratestats" 00:08:23.932 Run-time dependency libelf found: YES 0.191 00:08:23.932 Message: lib/bpf: Defining dependency "bpf" 00:08:23.932 Message: lib/cfgfile: Defining dependency "cfgfile" 00:08:23.932 Message: lib/compressdev: Defining dependency "compressdev" 00:08:23.932 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:23.932 Message: lib/distributor: Defining dependency "distributor" 00:08:23.932 Message: lib/dmadev: Defining dependency "dmadev" 00:08:23.932 Message: lib/efd: Defining dependency "efd" 00:08:23.932 Message: lib/eventdev: Defining dependency "eventdev" 00:08:23.932 Message: lib/dispatcher: Defining dependency "dispatcher" 00:08:23.932 Message: lib/gpudev: Defining dependency "gpudev" 00:08:23.932 Message: lib/gro: Defining dependency "gro" 00:08:23.932 Message: lib/gso: Defining dependency "gso" 00:08:23.932 Message: lib/ip_frag: Defining dependency "ip_frag" 00:08:23.932 Message: lib/jobstats: Defining dependency "jobstats" 00:08:23.932 Message: lib/latencystats: Defining dependency "latencystats" 00:08:23.932 Message: lib/lpm: Defining dependency "lpm" 00:08:23.932 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512IFMA__" : 1 00:08:23.932 Message: 
lib/member: Defining dependency "member" 00:08:23.932 Message: lib/pcapng: Defining dependency "pcapng" 00:08:23.932 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:23.932 Message: lib/power: Defining dependency "power" 00:08:23.932 Message: lib/rawdev: Defining dependency "rawdev" 00:08:23.932 Message: lib/regexdev: Defining dependency "regexdev" 00:08:23.932 Message: lib/mldev: Defining dependency "mldev" 00:08:23.932 Message: lib/rib: Defining dependency "rib" 00:08:23.932 Message: lib/reorder: Defining dependency "reorder" 00:08:23.932 Message: lib/sched: Defining dependency "sched" 00:08:23.932 Message: lib/security: Defining dependency "security" 00:08:23.932 Message: lib/stack: Defining dependency "stack" 00:08:23.932 Has header "linux/userfaultfd.h" : YES 00:08:23.932 Has header "linux/vduse.h" : YES 00:08:23.932 Message: lib/vhost: Defining dependency "vhost" 00:08:23.932 Message: lib/ipsec: Defining dependency "ipsec" 00:08:23.932 Message: lib/pdcp: Defining dependency "pdcp" 00:08:23.932 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:23.932 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:23.933 Message: lib/fib: Defining dependency "fib" 00:08:23.933 Message: lib/port: Defining dependency "port" 00:08:23.933 Message: lib/pdump: Defining dependency "pdump" 00:08:23.933 Message: lib/table: Defining dependency "table" 00:08:23.933 Message: lib/pipeline: Defining dependency "pipeline" 00:08:23.933 Message: lib/graph: Defining dependency "graph" 00:08:23.933 Message: lib/node: Defining dependency "node" 00:08:23.933 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:23.933 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:23.933 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:23.933 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:24.499 Compiler for C supports arguments -Wno-sign-compare: YES 00:08:24.499 Compiler for C supports arguments -Wno-unused-value: YES 00:08:24.499 Compiler for C supports arguments -Wno-format: YES 00:08:24.499 Compiler for C supports arguments -Wno-format-security: YES 00:08:24.499 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:08:24.499 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:08:24.499 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:08:24.499 Compiler for C supports arguments -Wno-unused-parameter: YES 00:08:24.499 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:24.499 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:24.499 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:24.499 Compiler for C supports arguments -mavx512bw: YES (cached) 00:08:24.499 Compiler for C supports arguments -march=skylake-avx512: YES 00:08:24.499 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:08:24.499 Has header "sys/epoll.h" : YES 00:08:24.499 Program doxygen found: YES (/usr/local/bin/doxygen) 00:08:24.499 Configuring doxy-api-html.conf using configuration 00:08:24.499 Configuring doxy-api-man.conf using configuration 00:08:24.499 Program mandb found: YES (/usr/bin/mandb) 00:08:24.499 Program sphinx-build found: NO 00:08:24.499 Configuring rte_build_config.h using configuration 00:08:24.499 Message: 00:08:24.499 ================= 00:08:24.499 Applications Enabled 00:08:24.499 ================= 00:08:24.499 00:08:24.499 apps: 00:08:24.499 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:08:24.499 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:08:24.499 test-pmd, test-regex, test-sad, test-security-perf, 00:08:24.499 00:08:24.499 Message: 00:08:24.499 ================= 00:08:24.499 Libraries Enabled 00:08:24.499 ================= 00:08:24.499 00:08:24.499 libs: 00:08:24.499 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:24.499 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:08:24.499 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:08:24.499 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:08:24.499 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:08:24.499 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:08:24.499 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:08:24.499 00:08:24.499 00:08:24.499 Message: 00:08:24.499 =============== 00:08:24.499 Drivers Enabled 00:08:24.499 =============== 00:08:24.499 00:08:24.499 common: 00:08:24.499 00:08:24.499 bus: 00:08:24.499 pci, vdev, 00:08:24.499 mempool: 00:08:24.499 ring, 00:08:24.499 dma: 00:08:24.499 00:08:24.499 net: 00:08:24.499 i40e, 00:08:24.499 raw: 00:08:24.499 00:08:24.500 crypto: 00:08:24.500 00:08:24.500 compress: 00:08:24.500 00:08:24.500 regex: 00:08:24.500 00:08:24.500 ml: 00:08:24.500 00:08:24.500 vdpa: 00:08:24.500 00:08:24.500 event: 00:08:24.500 00:08:24.500 baseband: 00:08:24.500 00:08:24.500 gpu: 00:08:24.500 00:08:24.500 00:08:24.500 Message: 00:08:24.500 ================= 00:08:24.500 Content Skipped 00:08:24.500 ================= 00:08:24.500 00:08:24.500 apps: 00:08:24.500 00:08:24.500 libs: 00:08:24.500 00:08:24.500 drivers: 00:08:24.500 common/cpt: not in enabled drivers build config 00:08:24.500 common/dpaax: not in enabled drivers build config 00:08:24.500 common/iavf: not in enabled drivers build config 00:08:24.500 common/idpf: not in enabled drivers build config 00:08:24.500 common/mvep: not in enabled drivers build config 00:08:24.500 common/octeontx: not in enabled drivers build config 00:08:24.500 bus/auxiliary: not in enabled drivers build config 00:08:24.500 bus/cdx: not in enabled drivers build config 00:08:24.500 bus/dpaa: not in enabled drivers build config 00:08:24.500 bus/fslmc: not in enabled drivers build config 00:08:24.500 bus/ifpga: not in enabled drivers build config 00:08:24.500 bus/platform: not in enabled drivers build config 00:08:24.500 bus/vmbus: not in enabled drivers build config 00:08:24.500 common/cnxk: not in enabled drivers build config 00:08:24.500 common/mlx5: not in enabled drivers build config 00:08:24.500 common/nfp: not in enabled drivers build config 00:08:24.500 common/qat: not in enabled drivers build config 00:08:24.500 common/sfc_efx: not in enabled drivers build config 00:08:24.500 mempool/bucket: not in enabled drivers build config 00:08:24.500 mempool/cnxk: not in enabled drivers build config 00:08:24.500 mempool/dpaa: not in enabled drivers build config 00:08:24.500 mempool/dpaa2: not in enabled drivers build config 00:08:24.500 mempool/octeontx: not in enabled drivers build config 00:08:24.500 mempool/stack: not in enabled drivers build config 00:08:24.500 dma/cnxk: not in enabled drivers build config 00:08:24.500 dma/dpaa: not in enabled drivers build config 00:08:24.500 dma/dpaa2: not in enabled drivers build config 00:08:24.500 dma/hisilicon: not in enabled drivers build config 00:08:24.500 dma/idxd: not in enabled drivers build 
config 00:08:24.500 dma/ioat: not in enabled drivers build config 00:08:24.500 dma/skeleton: not in enabled drivers build config 00:08:24.500 net/af_packet: not in enabled drivers build config 00:08:24.500 net/af_xdp: not in enabled drivers build config 00:08:24.500 net/ark: not in enabled drivers build config 00:08:24.500 net/atlantic: not in enabled drivers build config 00:08:24.500 net/avp: not in enabled drivers build config 00:08:24.500 net/axgbe: not in enabled drivers build config 00:08:24.500 net/bnx2x: not in enabled drivers build config 00:08:24.500 net/bnxt: not in enabled drivers build config 00:08:24.500 net/bonding: not in enabled drivers build config 00:08:24.500 net/cnxk: not in enabled drivers build config 00:08:24.500 net/cpfl: not in enabled drivers build config 00:08:24.500 net/cxgbe: not in enabled drivers build config 00:08:24.500 net/dpaa: not in enabled drivers build config 00:08:24.500 net/dpaa2: not in enabled drivers build config 00:08:24.500 net/e1000: not in enabled drivers build config 00:08:24.500 net/ena: not in enabled drivers build config 00:08:24.500 net/enetc: not in enabled drivers build config 00:08:24.500 net/enetfec: not in enabled drivers build config 00:08:24.500 net/enic: not in enabled drivers build config 00:08:24.500 net/failsafe: not in enabled drivers build config 00:08:24.500 net/fm10k: not in enabled drivers build config 00:08:24.500 net/gve: not in enabled drivers build config 00:08:24.500 net/hinic: not in enabled drivers build config 00:08:24.500 net/hns3: not in enabled drivers build config 00:08:24.500 net/iavf: not in enabled drivers build config 00:08:24.500 net/ice: not in enabled drivers build config 00:08:24.500 net/idpf: not in enabled drivers build config 00:08:24.500 net/igc: not in enabled drivers build config 00:08:24.500 net/ionic: not in enabled drivers build config 00:08:24.500 net/ipn3ke: not in enabled drivers build config 00:08:24.500 net/ixgbe: not in enabled drivers build config 00:08:24.500 net/mana: not in enabled drivers build config 00:08:24.500 net/memif: not in enabled drivers build config 00:08:24.500 net/mlx4: not in enabled drivers build config 00:08:24.500 net/mlx5: not in enabled drivers build config 00:08:24.500 net/mvneta: not in enabled drivers build config 00:08:24.500 net/mvpp2: not in enabled drivers build config 00:08:24.500 net/netvsc: not in enabled drivers build config 00:08:24.500 net/nfb: not in enabled drivers build config 00:08:24.500 net/nfp: not in enabled drivers build config 00:08:24.500 net/ngbe: not in enabled drivers build config 00:08:24.500 net/null: not in enabled drivers build config 00:08:24.500 net/octeontx: not in enabled drivers build config 00:08:24.500 net/octeon_ep: not in enabled drivers build config 00:08:24.500 net/pcap: not in enabled drivers build config 00:08:24.500 net/pfe: not in enabled drivers build config 00:08:24.500 net/qede: not in enabled drivers build config 00:08:24.500 net/ring: not in enabled drivers build config 00:08:24.500 net/sfc: not in enabled drivers build config 00:08:24.500 net/softnic: not in enabled drivers build config 00:08:24.500 net/tap: not in enabled drivers build config 00:08:24.500 net/thunderx: not in enabled drivers build config 00:08:24.500 net/txgbe: not in enabled drivers build config 00:08:24.500 net/vdev_netvsc: not in enabled drivers build config 00:08:24.500 net/vhost: not in enabled drivers build config 00:08:24.500 net/virtio: not in enabled drivers build config 00:08:24.500 net/vmxnet3: not in enabled drivers build config 
00:08:24.500 raw/cnxk_bphy: not in enabled drivers build config 00:08:24.500 raw/cnxk_gpio: not in enabled drivers build config 00:08:24.500 raw/dpaa2_cmdif: not in enabled drivers build config 00:08:24.500 raw/ifpga: not in enabled drivers build config 00:08:24.500 raw/ntb: not in enabled drivers build config 00:08:24.500 raw/skeleton: not in enabled drivers build config 00:08:24.500 crypto/armv8: not in enabled drivers build config 00:08:24.500 crypto/bcmfs: not in enabled drivers build config 00:08:24.500 crypto/caam_jr: not in enabled drivers build config 00:08:24.500 crypto/ccp: not in enabled drivers build config 00:08:24.500 crypto/cnxk: not in enabled drivers build config 00:08:24.500 crypto/dpaa_sec: not in enabled drivers build config 00:08:24.500 crypto/dpaa2_sec: not in enabled drivers build config 00:08:24.500 crypto/ipsec_mb: not in enabled drivers build config 00:08:24.500 crypto/mlx5: not in enabled drivers build config 00:08:24.500 crypto/mvsam: not in enabled drivers build config 00:08:24.500 crypto/nitrox: not in enabled drivers build config 00:08:24.500 crypto/null: not in enabled drivers build config 00:08:24.500 crypto/octeontx: not in enabled drivers build config 00:08:24.500 crypto/openssl: not in enabled drivers build config 00:08:24.500 crypto/scheduler: not in enabled drivers build config 00:08:24.500 crypto/uadk: not in enabled drivers build config 00:08:24.500 crypto/virtio: not in enabled drivers build config 00:08:24.500 compress/isal: not in enabled drivers build config 00:08:24.500 compress/mlx5: not in enabled drivers build config 00:08:24.500 compress/octeontx: not in enabled drivers build config 00:08:24.500 compress/zlib: not in enabled drivers build config 00:08:24.500 regex/mlx5: not in enabled drivers build config 00:08:24.500 regex/cn9k: not in enabled drivers build config 00:08:24.500 ml/cnxk: not in enabled drivers build config 00:08:24.500 vdpa/ifc: not in enabled drivers build config 00:08:24.500 vdpa/mlx5: not in enabled drivers build config 00:08:24.500 vdpa/nfp: not in enabled drivers build config 00:08:24.500 vdpa/sfc: not in enabled drivers build config 00:08:24.500 event/cnxk: not in enabled drivers build config 00:08:24.500 event/dlb2: not in enabled drivers build config 00:08:24.500 event/dpaa: not in enabled drivers build config 00:08:24.500 event/dpaa2: not in enabled drivers build config 00:08:24.500 event/dsw: not in enabled drivers build config 00:08:24.500 event/opdl: not in enabled drivers build config 00:08:24.500 event/skeleton: not in enabled drivers build config 00:08:24.500 event/sw: not in enabled drivers build config 00:08:24.500 event/octeontx: not in enabled drivers build config 00:08:24.500 baseband/acc: not in enabled drivers build config 00:08:24.500 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:08:24.500 baseband/fpga_lte_fec: not in enabled drivers build config 00:08:24.500 baseband/la12xx: not in enabled drivers build config 00:08:24.500 baseband/null: not in enabled drivers build config 00:08:24.500 baseband/turbo_sw: not in enabled drivers build config 00:08:24.500 gpu/cuda: not in enabled drivers build config 00:08:24.500 00:08:24.500 00:08:24.500 Build targets in project: 215 00:08:24.500 00:08:24.500 DPDK 23.11.0 00:08:24.500 00:08:24.500 User defined options 00:08:24.500 libdir : lib 00:08:24.500 prefix : /home/vagrant/spdk_repo/dpdk/build 00:08:24.500 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:08:24.500 c_link_args : 00:08:24.500 enable_docs : false 00:08:24.500 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:08:24.500 enable_kmods : false 00:08:24.500 machine : native 00:08:24.500 tests : false 00:08:24.500 00:08:24.500 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:24.500 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:08:24.500 12:39:24 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:08:24.500 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:08:24.759 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:24.759 [2/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:24.759 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:24.759 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:24.759 [5/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:24.759 [6/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:24.759 [7/705] Linking static target lib/librte_kvargs.a 00:08:24.759 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:24.759 [9/705] Linking static target lib/librte_log.a 00:08:24.759 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:24.759 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:25.019 [12/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:25.019 [13/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.020 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:25.020 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:25.020 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:25.020 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.278 [18/705] Linking target lib/librte_log.so.24.0 00:08:25.278 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:25.278 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:25.278 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:25.278 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:25.278 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:25.278 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:25.536 [25/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:08:25.536 [26/705] Linking target lib/librte_kvargs.so.24.0 00:08:25.536 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:25.536 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:25.536 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:25.536 [30/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:08:25.536 [31/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:25.536 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
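A reconstruction of the configure step implied by the "User defined options" summary above — not the literal command run by autobuild_common.sh, just a sketch assembled from the logged values; the WARNING shows the legacy bare `meson [options]` form was actually used, which newer Meson asks to be spelled `meson setup [options]`:

    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    ninja -C build-tmp -j10    # the build step logged just above
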
00:08:25.536 [33/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:25.536 [34/705] Linking static target lib/librte_telemetry.a 00:08:25.536 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:25.794 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:25.794 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:25.794 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:25.794 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:25.794 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:25.794 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:26.052 [42/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:26.052 [43/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.052 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:26.052 [45/705] Linking target lib/librte_telemetry.so.24.0 00:08:26.052 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:26.052 [47/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:08:26.310 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:26.310 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:26.310 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:26.310 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:26.310 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:26.310 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:26.310 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:26.310 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:26.310 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:26.568 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:26.568 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:26.568 [59/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:26.568 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:26.568 [61/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:26.568 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:26.568 [63/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:26.568 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:26.568 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:26.568 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:26.826 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:26.826 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:26.826 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:27.084 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:27.084 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:27.084 [72/705] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:27.084 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:27.084 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:27.084 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:27.084 [76/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:27.084 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:27.084 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:27.343 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:27.343 [80/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:27.343 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:27.343 [82/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:27.343 [83/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:27.600 [84/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:27.601 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:27.601 [86/705] Linking static target lib/librte_eal.a 00:08:27.601 [87/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:27.601 [88/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:27.601 [89/705] Linking static target lib/librte_ring.a 00:08:27.601 [90/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:27.601 [91/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:27.984 [92/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:27.984 [93/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.242 [94/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:28.242 [95/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:28.242 [96/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:28.242 [97/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:28.242 [98/705] Linking static target lib/librte_rcu.a 00:08:28.242 [99/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:28.242 [100/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:28.242 [101/705] Linking static target lib/librte_mbuf.a 00:08:28.242 [102/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:28.502 [103/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.502 [104/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:28.502 [105/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:08:28.502 [106/705] Linking static target lib/librte_net.a 00:08:28.502 [107/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:28.502 [108/705] Linking static target lib/librte_meter.a 00:08:28.502 [109/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:28.502 [110/705] Linking static target lib/librte_mempool.a 00:08:28.502 [111/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:28.502 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:28.760 [113/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.760 [114/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture 
output) 00:08:28.760 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:28.760 [116/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:28.760 [117/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.018 [118/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:29.018 [119/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.276 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:29.545 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:29.545 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:29.545 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:29.545 [124/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:29.545 [125/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:29.545 [126/705] Linking static target lib/librte_pci.a 00:08:29.545 [127/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:29.811 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:29.811 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:29.811 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:29.811 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.811 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:29.811 [133/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:29.811 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:29.811 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:29.811 [136/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:29.811 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:29.811 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:29.811 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:29.811 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:30.070 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:30.070 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:30.070 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:30.070 [144/705] Linking static target lib/librte_cmdline.a 00:08:30.070 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:08:30.329 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:08:30.329 [147/705] Linking static target lib/librte_metrics.a 00:08:30.329 [148/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:30.603 [149/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:30.603 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.603 [151/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:30.860 [152/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:30.860 [153/705] Linking static target lib/librte_timer.a 00:08:30.860 [154/705] Generating lib/cmdline.sym_chk with a 
custom command (wrapped by meson to capture output) 00:08:31.117 [155/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:08:31.117 [156/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:08:31.117 [157/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:31.374 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:08:31.374 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:08:31.631 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:08:31.631 [161/705] Linking static target lib/librte_bitratestats.a 00:08:31.631 [162/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:08:31.888 [163/705] Linking static target lib/librte_bbdev.a 00:08:31.888 [164/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:31.888 [165/705] Linking static target lib/librte_hash.a 00:08:31.888 [166/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:08:31.888 [167/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:08:31.888 [168/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.145 [169/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:08:32.145 [170/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.145 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:08:32.145 [172/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.402 [173/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:08:32.402 [174/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:08:32.658 [175/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:08:32.658 [176/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:08:32.658 [177/705] Linking static target lib/acl/libavx2_tmp.a 00:08:32.658 [178/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:08:32.658 [179/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:32.916 [180/705] Linking static target lib/librte_ethdev.a 00:08:32.916 [181/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:08:32.916 [182/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:08:32.916 [183/705] Linking static target lib/librte_cfgfile.a 00:08:32.916 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:08:32.916 [185/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.916 [186/705] Linking target lib/librte_eal.so.24.0 00:08:33.174 [187/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:33.174 [188/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:08:33.174 [189/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.174 [190/705] Linking target lib/librte_ring.so.24.0 00:08:33.174 [191/705] Linking target lib/librte_meter.so.24.0 00:08:33.174 [192/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:08:33.174 [193/705] Linking target lib/librte_rcu.so.24.0 00:08:33.174 [194/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:08:33.431 [195/705] Linking target lib/librte_mempool.so.24.0 00:08:33.431 [196/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:33.431 
[197/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:08:33.431 [198/705] Linking target lib/librte_timer.so.24.0 00:08:33.431 [199/705] Linking target lib/librte_pci.so.24.0 00:08:33.431 [200/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:08:33.431 [201/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:33.431 [202/705] Linking static target lib/librte_compressdev.a 00:08:33.432 [203/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:33.432 [204/705] Linking target lib/librte_cfgfile.so.24.0 00:08:33.432 [205/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:08:33.432 [206/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:08:33.432 [207/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:08:33.432 [208/705] Linking static target lib/librte_bpf.a 00:08:33.432 [209/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:08:33.432 [210/705] Linking target lib/librte_mbuf.so.24.0 00:08:33.432 [211/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:33.689 [212/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:08:33.689 [213/705] Linking target lib/librte_net.so.24.0 00:08:33.689 [214/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.689 [215/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:08:33.689 [216/705] Linking target lib/librte_bbdev.so.24.0 00:08:33.946 [217/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:08:33.946 [218/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:08:33.946 [219/705] Linking target lib/librte_cmdline.so.24.0 00:08:33.946 [220/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.946 [221/705] Linking target lib/librte_hash.so.24.0 00:08:33.946 [222/705] Linking target lib/librte_compressdev.so.24.0 00:08:33.946 [223/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:08:33.946 [224/705] Linking static target lib/librte_acl.a 00:08:33.946 [225/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:08:33.946 [226/705] Linking static target lib/librte_distributor.a 00:08:33.946 [227/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:08:34.203 [228/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:34.203 [229/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:08:34.203 [230/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.203 [231/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.203 [232/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:34.203 [233/705] Linking static target lib/librte_dmadev.a 00:08:34.203 [234/705] Linking target lib/librte_distributor.so.24.0 00:08:34.460 [235/705] Linking target lib/librte_acl.so.24.0 00:08:34.460 [236/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:08:34.460 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:08:34.718 [238/705] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.718 [239/705] Linking target lib/librte_dmadev.so.24.0 00:08:34.718 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:08:34.976 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:08:34.976 [242/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:08:35.233 [243/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:35.233 [244/705] Linking static target lib/librte_cryptodev.a 00:08:35.233 [245/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:08:35.233 [246/705] Linking static target lib/librte_efd.a 00:08:35.490 [247/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:08:35.490 [248/705] Linking static target lib/librte_dispatcher.a 00:08:35.490 [249/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.490 [250/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:08:35.490 [251/705] Linking target lib/librte_efd.so.24.0 00:08:35.490 [252/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:08:35.746 [253/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:08:35.746 [254/705] Linking static target lib/librte_gpudev.a 00:08:35.746 [255/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:08:35.746 [256/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.002 [257/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:08:36.002 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:08:36.002 [259/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:08:36.002 [260/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:08:36.259 [261/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.259 [262/705] Linking target lib/librte_cryptodev.so.24.0 00:08:36.259 [263/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:08:36.259 [264/705] Linking static target lib/librte_eventdev.a 00:08:36.259 [265/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:08:36.259 [266/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:08:36.259 [267/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.517 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:08:36.517 [269/705] Linking target lib/librte_gpudev.so.24.0 00:08:36.517 [270/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:08:36.517 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:08:36.517 [272/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:08:36.517 [273/705] Linking static target lib/librte_gro.a 00:08:36.517 [274/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:08:36.774 [275/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:08:36.774 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:08:36.774 [277/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.774 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:08:36.774 [279/705] 
Linking static target lib/librte_gso.a 00:08:36.774 [280/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:08:36.774 [281/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:08:37.031 [282/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:08:37.031 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:08:37.031 [284/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.031 [285/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.031 [286/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:08:37.031 [287/705] Linking static target lib/librte_jobstats.a 00:08:37.031 [288/705] Linking target lib/librte_ethdev.so.24.0 00:08:37.289 [289/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:08:37.289 [290/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:08:37.289 [291/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:08:37.289 [292/705] Linking target lib/librte_metrics.so.24.0 00:08:37.289 [293/705] Linking target lib/librte_bpf.so.24.0 00:08:37.289 [294/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:08:37.289 [295/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:08:37.289 [296/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.289 [297/705] Linking static target lib/librte_ip_frag.a 00:08:37.289 [298/705] Linking target lib/librte_gro.so.24.0 00:08:37.289 [299/705] Linking target lib/librte_gso.so.24.0 00:08:37.289 [300/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:08:37.289 [301/705] Linking static target lib/librte_latencystats.a 00:08:37.546 [302/705] Linking target lib/librte_jobstats.so.24.0 00:08:37.546 [303/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:08:37.546 [304/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:08:37.546 [305/705] Linking target lib/librte_bitratestats.so.24.0 00:08:37.546 [306/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:37.546 [307/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.546 [308/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.546 [309/705] Linking target lib/librte_latencystats.so.24.0 00:08:37.803 [310/705] Linking target lib/librte_ip_frag.so.24.0 00:08:37.803 [311/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:08:37.803 [312/705] Linking static target lib/librte_lpm.a 00:08:37.803 [313/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:08:37.803 [314/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:37.803 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:08:37.803 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:08:38.061 [317/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:38.061 [318/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:38.061 [319/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.061 
[320/705] Linking target lib/librte_lpm.so.24.0 00:08:38.061 [321/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.061 [322/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:38.061 [323/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:08:38.061 [324/705] Linking static target lib/librte_pcapng.a 00:08:38.061 [325/705] Linking target lib/librte_eventdev.so.24.0 00:08:38.061 [326/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:08:38.061 [327/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:08:38.319 [328/705] Linking target lib/librte_dispatcher.so.24.0 00:08:38.319 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:38.319 [330/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.319 [331/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:38.319 [332/705] Linking target lib/librte_pcapng.so.24.0 00:08:38.319 [333/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:38.319 [334/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:08:38.319 [335/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:08:38.319 [336/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:38.319 [337/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:38.590 [338/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:38.590 [339/705] Linking static target lib/librte_power.a 00:08:38.590 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:08:38.590 [341/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:08:38.590 [342/705] Linking static target lib/librte_regexdev.a 00:08:38.590 [343/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:08:38.590 [344/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:08:38.590 [345/705] Linking static target lib/librte_rawdev.a 00:08:38.879 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:08:38.879 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:08:38.879 [348/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:08:38.879 [349/705] Linking static target lib/librte_mldev.a 00:08:38.879 [350/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:08:38.879 [351/705] Linking static target lib/librte_member.a 00:08:38.879 [352/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:08:38.879 [353/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:08:39.138 [354/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.138 [355/705] Linking target lib/librte_power.so.24.0 00:08:39.138 [356/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:08:39.138 [357/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.138 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.138 [359/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:39.138 [360/705] Linking static target lib/librte_reorder.a 00:08:39.138 [361/705] Linking target 
lib/librte_member.so.24.0 00:08:39.138 [362/705] Linking target lib/librte_rawdev.so.24.0 00:08:39.138 [363/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:08:39.395 [364/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:08:39.395 [365/705] Linking static target lib/librte_rib.a 00:08:39.395 [366/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.395 [367/705] Linking target lib/librte_regexdev.so.24.0 00:08:39.395 [368/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:39.395 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:08:39.395 [370/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.395 [371/705] Linking target lib/librte_reorder.so.24.0 00:08:39.395 [372/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:08:39.395 [373/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:08:39.395 [374/705] Linking static target lib/librte_stack.a 00:08:39.654 [375/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:08:39.654 [376/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.654 [377/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.654 [378/705] Linking target lib/librte_rib.so.24.0 00:08:39.654 [379/705] Linking target lib/librte_stack.so.24.0 00:08:39.654 [380/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:39.654 [381/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:39.654 [382/705] Linking static target lib/librte_security.a 00:08:39.912 [383/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:08:39.912 [384/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.912 [385/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:39.912 [386/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:39.912 [387/705] Linking target lib/librte_mldev.so.24.0 00:08:39.912 [388/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:08:39.912 [389/705] Linking static target lib/librte_sched.a 00:08:40.169 [390/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.169 [391/705] Linking target lib/librte_security.so.24.0 00:08:40.169 [392/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:40.169 [393/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:08:40.427 [394/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.427 [395/705] Linking target lib/librte_sched.so.24.0 00:08:40.427 [396/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:40.427 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:40.427 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:08:40.684 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:08:40.943 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:08:40.943 [401/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:08:40.943 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:08:40.943 [403/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 
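The "Linking target lib/librte_*.so.24.0" records in this stretch are the shared-library link steps, and the "Generating symbol file ..." lines that follow them are Meson's exported-symbol snapshots, kept so that dependents are relinked only when a library's exported symbol list actually changes. Once the build completes, one way to inspect the exports of a produced DSO (path assumed from the build directory logged above, where link targets are relative to build-tmp):

    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.24.0 | head
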
00:08:41.201 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:08:41.201 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:08:41.201 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:08:41.201 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:08:41.201 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:08:41.458 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:08:41.458 [410/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:08:41.458 [411/705] Linking static target lib/librte_ipsec.a 00:08:41.458 [412/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:08:41.458 [413/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:08:41.717 [414/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:08:41.717 [415/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:08:41.717 [416/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:08:41.717 [417/705] Linking target lib/librte_ipsec.so.24.0 00:08:41.975 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:08:41.975 [419/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:08:41.975 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:08:41.975 [421/705] Linking static target lib/librte_fib.a 00:08:42.234 [422/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:08:42.234 [423/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:08:42.234 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:08:42.234 [425/705] Linking target lib/librte_fib.so.24.0 00:08:42.234 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:08:42.234 [427/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:08:42.234 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:08:42.799 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:08:42.799 [430/705] Linking static target lib/librte_pdcp.a 00:08:42.799 [431/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:08:42.799 [432/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:08:42.799 [433/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:08:42.799 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:08:43.057 [435/705] Linking target lib/librte_pdcp.so.24.0 00:08:43.057 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:08:43.057 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:08:43.057 [438/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:08:43.057 [439/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:08:43.315 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:08:43.315 [441/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:08:43.315 [442/705] Linking static target lib/librte_port.a 00:08:43.315 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:08:43.315 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:08:43.574 [445/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:08:43.574 [446/705] Compiling C 
object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:08:43.574 [447/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:08:43.574 [448/705] Linking static target lib/librte_pdump.a 00:08:43.574 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:08:43.574 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:08:43.832 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:08:43.832 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.832 [453/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.832 [454/705] Linking target lib/librte_pdump.so.24.0 00:08:43.832 [455/705] Linking target lib/librte_port.so.24.0 00:08:43.832 [456/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:08:44.089 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:08:44.089 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:08:44.089 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:08:44.089 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:08:44.347 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:08:44.347 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:08:44.347 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:08:44.605 [464/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:08:44.605 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:08:44.605 [466/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:08:44.605 [467/705] Linking static target lib/librte_table.a 00:08:44.863 [468/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:44.863 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:08:45.121 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.121 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:08:45.121 [472/705] Linking target lib/librte_table.so.24.0 00:08:45.121 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:08:45.121 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:08:45.391 [475/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:08:45.391 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:08:45.651 [477/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:08:45.651 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:08:45.651 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:08:45.651 [480/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:08:45.651 [481/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:08:45.908 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:08:45.908 [483/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:08:46.166 [484/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:08:46.166 [485/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:08:46.166 [486/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 
00:08:46.166 [487/705] Linking static target lib/librte_graph.a 00:08:46.166 [488/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:08:46.423 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:08:46.423 [490/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:08:46.680 [491/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:08:46.680 [492/705] Linking target lib/librte_graph.so.24.0 00:08:46.938 [493/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:08:46.938 [494/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:08:46.938 [495/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:08:46.938 [496/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:08:46.938 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:08:46.938 [498/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:08:46.938 [499/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:08:46.938 [500/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:46.938 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:08:47.196 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:08:47.196 [503/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:47.452 [504/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:08:47.452 [505/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:08:47.453 [506/705] Linking static target lib/librte_node.a 00:08:47.453 [507/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:47.453 [508/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:47.453 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:47.453 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:47.710 [511/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:47.710 [512/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:47.710 [513/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.710 [514/705] Linking target lib/librte_node.so.24.0 00:08:47.710 [515/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:47.710 [516/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:47.710 [517/705] Linking static target drivers/librte_bus_vdev.a 00:08:47.967 [518/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:47.967 [519/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:47.967 [520/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:08:47.967 [521/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:08:47.967 [522/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:08:47.967 [523/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:47.967 [524/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.967 [525/705] Linking target drivers/librte_bus_vdev.so.24.0 00:08:47.967 [526/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:47.967 [527/705] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:48.224 [528/705] Linking static target drivers/librte_bus_pci.a 00:08:48.224 [529/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:08:48.224 [530/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:48.224 [531/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:48.224 [532/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:48.224 [533/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:08:48.481 [534/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:48.481 [535/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:48.481 [536/705] Linking static target drivers/librte_mempool_ring.a 00:08:48.481 [537/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:48.481 [538/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:48.481 [539/705] Linking target drivers/librte_mempool_ring.so.24.0 00:08:48.481 [540/705] Linking target drivers/librte_bus_pci.so.24.0 00:08:48.737 [541/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:08:48.737 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:08:48.737 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:08:49.697 [544/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:08:49.697 [545/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:08:49.697 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:08:49.697 [547/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:08:49.697 [548/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:08:49.697 [549/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:08:49.697 [550/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:08:49.697 [551/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:08:49.955 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:08:49.955 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:08:50.234 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:08:50.234 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:08:50.234 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:08:50.491 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:08:50.491 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:08:50.748 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:08:50.748 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:08:50.748 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:08:51.006 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:08:51.006 [563/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:08:51.006 [564/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:08:51.263 [565/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:08:51.263 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:08:51.263 [567/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:08:51.263 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:08:51.263 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:08:51.263 [570/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:08:51.521 [571/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:08:51.521 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:08:51.521 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:08:51.778 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:08:51.778 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:08:51.778 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:08:51.778 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:08:52.036 [578/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:08:52.036 [579/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:08:52.294 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:08:52.294 [581/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:08:52.551 [582/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:08:52.551 [583/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:08:52.551 [584/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:08:52.808 [585/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:08:52.808 [586/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:08:52.808 [587/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:08:52.808 [588/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:08:53.065 [589/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:08:53.065 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:08:53.065 [591/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:08:53.065 [592/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:08:53.065 [593/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:08:53.065 [594/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:08:53.065 [595/705] Linking static target drivers/librte_net_i40e.a 00:08:53.322 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:08:53.322 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:08:53.580 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:08:53.580 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:08:53.580 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:08:53.580 [601/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 
00:08:53.580 [602/705] Linking target drivers/librte_net_i40e.so.24.0 00:08:53.580 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:08:53.839 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:08:53.839 [605/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:53.839 [606/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:08:53.839 [607/705] Linking static target lib/librte_vhost.a 00:08:53.839 [608/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:08:53.839 [609/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:08:53.839 [610/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:08:53.839 [611/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:08:54.096 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:08:54.096 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:08:54.354 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:08:54.354 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:08:54.612 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:08:54.612 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:54.869 [618/705] Linking target lib/librte_vhost.so.24.0 00:08:54.869 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:08:54.870 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:08:55.128 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:08:55.128 [622/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:08:55.128 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:08:55.128 [624/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:08:55.386 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:08:55.386 [626/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:08:55.386 [627/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:08:55.386 [628/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:08:55.386 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:08:55.642 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:08:55.642 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:08:55.642 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:08:55.642 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:08:55.900 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:08:55.900 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:08:55.900 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:08:55.900 [637/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:08:55.900 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:08:56.157 [639/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:08:56.157 [640/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:08:56.157 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:08:56.157 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:08:56.157 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:08:56.157 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:08:56.414 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:08:56.414 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:08:56.414 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:08:56.671 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:08:56.671 [649/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:08:56.671 [650/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:08:56.928 [651/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:08:56.928 [652/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:08:56.928 [653/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:08:56.928 [654/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:08:57.186 [655/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:08:57.186 [656/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:08:57.186 [657/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:08:57.443 [658/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:08:57.443 [659/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:08:57.443 [660/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:08:57.701 [661/705] Linking static target lib/librte_pipeline.a 00:08:57.701 [662/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:08:57.701 [663/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:08:57.701 [664/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:08:57.959 [665/705] Linking target app/dpdk-dumpcap 00:08:57.959 [666/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:08:57.959 [667/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:08:57.959 [668/705] Linking target app/dpdk-graph 00:08:57.959 [669/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:08:58.217 [670/705] Linking target app/dpdk-pdump 00:08:58.217 [671/705] Linking target app/dpdk-test-acl 00:08:58.217 [672/705] Linking target app/dpdk-proc-info 00:08:58.217 [673/705] Linking target app/dpdk-test-cmdline 00:08:58.217 [674/705] Linking target app/dpdk-test-compress-perf 00:08:58.475 [675/705] Linking target app/dpdk-test-bbdev 00:08:58.475 [676/705] Linking target app/dpdk-test-dma-perf 00:08:58.475 [677/705] Linking target app/dpdk-test-crypto-perf 00:08:58.733 [678/705] Linking target app/dpdk-test-fib 00:08:58.733 [679/705] Linking target app/dpdk-test-eventdev 00:08:58.733 [680/705] Linking target app/dpdk-test-flow-perf 00:08:58.733 [681/705] Linking target app/dpdk-test-gpudev 00:08:58.733 [682/705] Linking target app/dpdk-test-mldev 00:08:58.989 [683/705] Linking target app/dpdk-test-pipeline 00:08:58.990 [684/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:08:58.990 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:08:58.990 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:08:58.990 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:08:59.247 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:08:59.247 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:08:59.247 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:08:59.505 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:08:59.505 [692/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:59.505 [693/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:08:59.505 [694/705] Linking target lib/librte_pipeline.so.24.0 00:08:59.505 [695/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:08:59.761 [696/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:08:59.761 [697/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:09:00.017 [698/705] Linking target app/dpdk-test-sad 00:09:00.017 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:09:00.017 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:09:00.017 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:09:00.017 [702/705] Linking target app/dpdk-test-regex 00:09:00.353 [703/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:09:00.353 [704/705] Linking target app/dpdk-testpmd 00:09:00.635 [705/705] Linking target app/dpdk-test-security-perf 00:09:00.635 12:40:00 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:09:00.635 12:40:00 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:09:00.635 12:40:00 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:09:00.635 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:09:00.635 [0/1] Installing files. 
00:09:00.895 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.895 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.896 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:09:00.897 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.897 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.898 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.899 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.899 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.900 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.900 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.900 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.900 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:09:00.900 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:09:00.900 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:00.900 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:00.900 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:09:00.900 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:09:00.900 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.900 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.160 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.160 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.160 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.160 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:09:01.160 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.160 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:09:01.160 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.160 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:09:01.160 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.160 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:09:01.160 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.160 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.161 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.161 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.162 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.163 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:09:01.164 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:09:01.164 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:09:01.164 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:09:01.164 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:09:01.164 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:09:01.164 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:09:01.164 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:09:01.164 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:09:01.164 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:09:01.164 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:09:01.164 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:09:01.164 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:09:01.164 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:09:01.164 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:09:01.164 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:09:01.164 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:09:01.164 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:09:01.164 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:09:01.164 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:09:01.164 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:09:01.164 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:09:01.164 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:09:01.164 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:09:01.164 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:09:01.164 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:09:01.164 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:09:01.164 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:09:01.164 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:09:01.164 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:09:01.164 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:09:01.164 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:09:01.164 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:09:01.164 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:09:01.164 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:09:01.164 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:09:01.164 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:09:01.164 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:09:01.164 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:09:01.164 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:09:01.164 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:09:01.164 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:09:01.164 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:09:01.164 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:09:01.164 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:09:01.164 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:09:01.164 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:09:01.164 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:09:01.164 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:09:01.164 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:09:01.164 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:09:01.164 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:09:01.164 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:09:01.164 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:09:01.164 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:09:01.164 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:09:01.164 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:09:01.164 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:09:01.164 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:09:01.164 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:09:01.164 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:09:01.164 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:09:01.164 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:09:01.164 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:09:01.164 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:09:01.164 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:09:01.164 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:09:01.164 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:09:01.164 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:09:01.164 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:09:01.164 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:09:01.164 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:09:01.164 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:09:01.164 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:09:01.164 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:09:01.164 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:09:01.164 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:09:01.164 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:09:01.164 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:09:01.164 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:09:01.164 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:09:01.164 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:09:01.164 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:09:01.164 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:09:01.164 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:09:01.164 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:09:01.164 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:09:01.164 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:09:01.164 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:09:01.164 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:09:01.164 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:09:01.165 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:09:01.165 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:09:01.165 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:09:01.165 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:09:01.165 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:09:01.165 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:09:01.165 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:09:01.165 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:09:01.165 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:09:01.165 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:09:01.165 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:09:01.165 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:09:01.165 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:09:01.165 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:09:01.165 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:09:01.165 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:09:01.165 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:09:01.165 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:09:01.165 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:09:01.165 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:09:01.165 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:09:01.165 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:09:01.165 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:09:01.165 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:09:01.165 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:09:01.165 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:09:01.165 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:09:01.165 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:09:01.165 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:09:01.165 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:09:01.165 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:09:01.165 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:09:01.165 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:09:01.165 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:09:01.165 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:09:01.165 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:09:01.165 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:09:01.165 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:09:01.165 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:09:01.165 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:09:01.165 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:09:01.165 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:09:01.165 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:09:01.165 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:09:01.165 12:40:00 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:09:01.165 12:40:00 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:01.165 00:09:01.165 real 0m42.702s 00:09:01.165 user 5m1.568s 00:09:01.165 sys 0m48.035s 00:09:01.165 12:40:00 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:09:01.165 12:40:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:09:01.165 ************************************ 00:09:01.165 END TEST build_native_dpdk 00:09:01.165 ************************************ 00:09:01.422 12:40:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:09:01.422 12:40:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:09:01.422 12:40:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:09:01.422 12:40:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:09:01.422 12:40:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:09:01.422 12:40:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:09:01.422 12:40:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:09:01.422 12:40:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:09:01.422 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:09:01.422 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.422 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:09:01.422 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:01.680 Using 'verbs' RDMA provider 00:09:13.036 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:23.095 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:09:23.424 Creating mk/config.mk...done. 00:09:23.424 Creating mk/cc.flags.mk...done. 00:09:23.424 Type 'make' to build. 00:09:23.424 12:40:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:09:23.424 12:40:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:09:23.424 12:40:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:09:23.424 12:40:23 -- common/autotest_common.sh@10 -- $ set +x 00:09:23.424 ************************************ 00:09:23.424 START TEST make 00:09:23.424 ************************************ 00:09:23.424 12:40:23 make -- common/autotest_common.sh@1129 -- $ make -j10 00:09:23.685 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:09:23.685 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:09:23.685 meson setup builddir \ 00:09:23.685 -Dwith-libaio=enabled \ 00:09:23.685 -Dwith-liburing=enabled \ 00:09:23.685 -Dwith-libvfn=disabled \ 00:09:23.685 -Dwith-spdk=disabled \ 00:09:23.685 -Dexamples=false \ 00:09:23.685 -Dtests=false \ 00:09:23.685 -Dtools=false && \ 00:09:23.685 meson compile -C builddir && \ 00:09:23.685 cd -) 00:09:23.944 make[1]: Nothing to be done for 'all'. 
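(Aside, not part of the console output above: the install pass placed libdpdk.pc under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig and the usertools helpers under build/bin, and the SPDK configure call above consumes exactly that pkgconfig directory via --with-dpdk. A minimal sketch of driving the same install by hand, assuming the paths shown in the log; hello_dpdk.c is a hypothetical source file and the hugepage amount is an arbitrary example.)

    # reserve hugepages and inspect device bindings with the helpers installed above
    /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-hugepages.py --setup 2G
    /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-devbind.py --status
    # point pkg-config at the private install prefix and compile against shared DPDK
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk
    cc hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)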
00:09:25.841 The Meson build system 00:09:25.841 Version: 1.5.0 00:09:25.841 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:09:25.841 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:09:25.841 Build type: native build 00:09:25.841 Project name: xnvme 00:09:25.841 Project version: 0.7.5 00:09:25.841 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:09:25.841 C linker for the host machine: gcc ld.bfd 2.40-14 00:09:25.841 Host machine cpu family: x86_64 00:09:25.841 Host machine cpu: x86_64 00:09:25.841 Message: host_machine.system: linux 00:09:25.841 Compiler for C supports arguments -Wno-missing-braces: YES 00:09:25.841 Compiler for C supports arguments -Wno-cast-function-type: YES 00:09:25.841 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:09:25.841 Run-time dependency threads found: YES 00:09:25.841 Has header "setupapi.h" : NO 00:09:25.841 Has header "linux/blkzoned.h" : YES 00:09:25.841 Has header "linux/blkzoned.h" : YES (cached) 00:09:25.841 Has header "libaio.h" : YES 00:09:25.841 Library aio found: YES 00:09:25.841 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:09:25.841 Run-time dependency liburing found: YES 2.2 00:09:25.841 Dependency libvfn skipped: feature with-libvfn disabled 00:09:25.841 Found CMake: /usr/bin/cmake (3.27.7) 00:09:25.841 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:09:25.841 Subproject spdk : skipped: feature with-spdk disabled 00:09:25.841 Run-time dependency appleframeworks found: NO (tried framework) 00:09:25.841 Run-time dependency appleframeworks found: NO (tried framework) 00:09:25.841 Library rt found: YES 00:09:25.841 Checking for function "clock_gettime" with dependency -lrt: YES 00:09:25.841 Configuring xnvme_config.h using configuration 00:09:25.841 Configuring xnvme.spec using configuration 00:09:25.841 Run-time dependency bash-completion found: YES 2.11 00:09:25.841 Message: Bash-completions: /usr/share/bash-completion/completions 00:09:25.841 Program cp found: YES (/usr/bin/cp) 00:09:25.841 Build targets in project: 3 00:09:25.841 00:09:25.841 xnvme 0.7.5 00:09:25.841 00:09:25.841 Subprojects 00:09:25.841 spdk : NO Feature 'with-spdk' disabled 00:09:25.841 00:09:25.841 User defined options 00:09:25.841 examples : false 00:09:25.841 tests : false 00:09:25.841 tools : false 00:09:25.841 with-libaio : enabled 00:09:25.841 with-liburing: enabled 00:09:25.841 with-libvfn : disabled 00:09:25.841 with-spdk : disabled 00:09:25.841 00:09:25.841 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:26.098 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:09:26.098 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:09:26.355 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:09:26.355 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:09:26.355 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:09:26.355 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:09:26.355 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:09:26.355 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:09:26.355 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:09:26.355 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:09:26.355 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
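(Aside, not part of the console output: the meson summary above mirrors the feature flags passed at setup time at the start of this test, with examples/tests/tools off, libaio and liburing probed and found, and libvfn and the spdk subproject disabled. A sketch of flipping one of those options after the fact, assuming the builddir shown above; nothing here ran as part of this job.)

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir -Dexamples=true   # re-enable an option disabled at setup
    meson compile -C builddir                  # ninja re-probes and rebuilds as needed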
00:09:26.355 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:09:26.355 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:09:26.355 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:09:26.355 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:09:26.355 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:09:26.355 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:09:26.355 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:09:26.355 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:09:26.613 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:09:26.613 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:09:26.613 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:09:26.613 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:09:26.613 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:09:26.613 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:09:26.613 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:09:26.613 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:09:26.613 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:09:26.613 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:09:26.613 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:09:26.613 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:09:26.613 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:09:26.613 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:09:26.613 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:09:26.613 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:09:26.613 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:09:26.613 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:09:26.613 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:09:26.613 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:09:26.613 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:09:26.613 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:09:26.613 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:09:26.613 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:09:26.613 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:09:26.613 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:09:26.613 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:09:26.613 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:09:26.613 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:09:26.613 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:09:26.613 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:09:26.613 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 
00:09:26.613 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:09:26.613 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:09:26.613 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:09:26.962 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:09:26.962 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:09:26.962 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:09:26.962 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:09:26.962 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:09:26.962 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:09:26.962 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:09:26.962 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:09:26.962 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:09:26.962 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:09:26.962 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:09:26.962 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:09:26.962 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:09:26.962 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:09:26.962 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:09:26.962 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:09:26.962 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:09:26.962 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:09:26.962 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:09:26.962 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:09:27.233 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:09:27.233 [75/76] Linking static target lib/libxnvme.a 00:09:27.491 [76/76] Linking target lib/libxnvme.so.0.7.5 00:09:27.491 INFO: autodetecting backend as ninja 00:09:27.491 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:09:27.491 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:10:06.182 CC lib/ut_mock/mock.o 00:10:06.182 CC lib/ut/ut.o 00:10:06.182 CC lib/log/log_flags.o 00:10:06.182 CC lib/log/log.o 00:10:06.182 CC lib/log/log_deprecated.o 00:10:06.182 LIB libspdk_ut_mock.a 00:10:06.182 LIB libspdk_ut.a 00:10:06.182 LIB libspdk_log.a 00:10:06.182 SO libspdk_ut.so.2.0 00:10:06.182 SO libspdk_ut_mock.so.6.0 00:10:06.182 SO libspdk_log.so.7.1 00:10:06.182 SYMLINK libspdk_ut_mock.so 00:10:06.182 SYMLINK libspdk_ut.so 00:10:06.182 SYMLINK libspdk_log.so 00:10:06.182 CXX lib/trace_parser/trace.o 00:10:06.182 CC lib/dma/dma.o 00:10:06.182 CC lib/util/base64.o 00:10:06.182 CC lib/util/bit_array.o 00:10:06.182 CC lib/util/crc16.o 00:10:06.182 CC lib/util/cpuset.o 00:10:06.182 CC lib/util/crc32.o 00:10:06.182 CC lib/util/crc32c.o 00:10:06.182 CC lib/ioat/ioat.o 00:10:06.182 CC lib/vfio_user/host/vfio_user_pci.o 00:10:06.182 CC lib/util/crc32_ieee.o 00:10:06.182 CC lib/vfio_user/host/vfio_user.o 00:10:06.182 CC lib/util/crc64.o 00:10:06.182 CC lib/util/dif.o 00:10:06.182 LIB libspdk_dma.a 00:10:06.182 CC lib/util/fd.o 00:10:06.182 CC lib/util/fd_group.o 00:10:06.182 SO libspdk_dma.so.5.0 00:10:06.182 CC lib/util/file.o 00:10:06.182 CC lib/util/hexlify.o 00:10:06.182 SYMLINK libspdk_dma.so 00:10:06.182 CC lib/util/iov.o 00:10:06.182 LIB 
libspdk_ioat.a 00:10:06.182 SO libspdk_ioat.so.7.0 00:10:06.182 CC lib/util/math.o 00:10:06.182 CC lib/util/net.o 00:10:06.182 LIB libspdk_vfio_user.a 00:10:06.182 SYMLINK libspdk_ioat.so 00:10:06.182 SO libspdk_vfio_user.so.5.0 00:10:06.182 CC lib/util/pipe.o 00:10:06.182 CC lib/util/strerror_tls.o 00:10:06.182 CC lib/util/string.o 00:10:06.182 SYMLINK libspdk_vfio_user.so 00:10:06.182 CC lib/util/uuid.o 00:10:06.182 CC lib/util/xor.o 00:10:06.182 CC lib/util/zipf.o 00:10:06.182 CC lib/util/md5.o 00:10:06.182 LIB libspdk_util.a 00:10:06.182 SO libspdk_util.so.10.1 00:10:06.182 SYMLINK libspdk_util.so 00:10:06.182 LIB libspdk_trace_parser.a 00:10:06.182 SO libspdk_trace_parser.so.6.0 00:10:06.182 SYMLINK libspdk_trace_parser.so 00:10:06.182 CC lib/vmd/vmd.o 00:10:06.182 CC lib/vmd/led.o 00:10:06.182 CC lib/conf/conf.o 00:10:06.182 CC lib/rdma_utils/rdma_utils.o 00:10:06.182 CC lib/env_dpdk/env.o 00:10:06.182 CC lib/json/json_parse.o 00:10:06.182 CC lib/env_dpdk/memory.o 00:10:06.182 CC lib/json/json_util.o 00:10:06.182 CC lib/env_dpdk/pci.o 00:10:06.182 CC lib/idxd/idxd.o 00:10:06.182 CC lib/idxd/idxd_user.o 00:10:06.182 LIB libspdk_conf.a 00:10:06.182 CC lib/idxd/idxd_kernel.o 00:10:06.182 SO libspdk_conf.so.6.0 00:10:06.182 CC lib/json/json_write.o 00:10:06.182 LIB libspdk_rdma_utils.a 00:10:06.182 SO libspdk_rdma_utils.so.1.0 00:10:06.182 SYMLINK libspdk_conf.so 00:10:06.182 CC lib/env_dpdk/init.o 00:10:06.182 CC lib/env_dpdk/threads.o 00:10:06.182 SYMLINK libspdk_rdma_utils.so 00:10:06.182 CC lib/env_dpdk/pci_ioat.o 00:10:06.182 CC lib/env_dpdk/pci_virtio.o 00:10:06.182 CC lib/env_dpdk/pci_vmd.o 00:10:06.182 CC lib/env_dpdk/pci_idxd.o 00:10:06.182 CC lib/env_dpdk/pci_event.o 00:10:06.182 CC lib/env_dpdk/sigbus_handler.o 00:10:06.182 CC lib/env_dpdk/pci_dpdk.o 00:10:06.182 LIB libspdk_idxd.a 00:10:06.182 LIB libspdk_json.a 00:10:06.182 CC lib/env_dpdk/pci_dpdk_2207.o 00:10:06.182 SO libspdk_idxd.so.12.1 00:10:06.182 SO libspdk_json.so.6.0 00:10:06.182 SYMLINK libspdk_json.so 00:10:06.182 SYMLINK libspdk_idxd.so 00:10:06.182 CC lib/env_dpdk/pci_dpdk_2211.o 00:10:06.182 LIB libspdk_vmd.a 00:10:06.182 SO libspdk_vmd.so.6.0 00:10:06.182 CC lib/rdma_provider/rdma_provider_verbs.o 00:10:06.182 CC lib/rdma_provider/common.o 00:10:06.182 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:10:06.182 CC lib/jsonrpc/jsonrpc_server.o 00:10:06.182 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:10:06.182 CC lib/jsonrpc/jsonrpc_client.o 00:10:06.182 SYMLINK libspdk_vmd.so 00:10:06.182 LIB libspdk_rdma_provider.a 00:10:06.182 LIB libspdk_jsonrpc.a 00:10:06.182 SO libspdk_rdma_provider.so.7.0 00:10:06.182 SO libspdk_jsonrpc.so.6.0 00:10:06.182 SYMLINK libspdk_rdma_provider.so 00:10:06.182 SYMLINK libspdk_jsonrpc.so 00:10:06.440 CC lib/rpc/rpc.o 00:10:06.697 LIB libspdk_env_dpdk.a 00:10:06.697 SO libspdk_env_dpdk.so.15.1 00:10:06.697 LIB libspdk_rpc.a 00:10:06.697 SO libspdk_rpc.so.6.0 00:10:06.697 SYMLINK libspdk_rpc.so 00:10:06.955 SYMLINK libspdk_env_dpdk.so 00:10:06.955 CC lib/keyring/keyring_rpc.o 00:10:06.955 CC lib/keyring/keyring.o 00:10:06.955 CC lib/trace/trace.o 00:10:06.955 CC lib/trace/trace_flags.o 00:10:06.955 CC lib/notify/notify_rpc.o 00:10:06.955 CC lib/notify/notify.o 00:10:06.955 CC lib/trace/trace_rpc.o 00:10:07.211 LIB libspdk_notify.a 00:10:07.211 LIB libspdk_keyring.a 00:10:07.211 SO libspdk_notify.so.6.0 00:10:07.211 SO libspdk_keyring.so.2.0 00:10:07.211 SYMLINK libspdk_notify.so 00:10:07.211 LIB libspdk_trace.a 00:10:07.211 SYMLINK libspdk_keyring.so 00:10:07.211 SO libspdk_trace.so.11.0 
00:10:07.211 SYMLINK libspdk_trace.so 00:10:07.470 CC lib/thread/thread.o 00:10:07.470 CC lib/thread/iobuf.o 00:10:07.470 CC lib/sock/sock.o 00:10:07.470 CC lib/sock/sock_rpc.o 00:10:08.035 LIB libspdk_sock.a 00:10:08.035 SO libspdk_sock.so.10.0 00:10:08.035 SYMLINK libspdk_sock.so 00:10:08.293 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:08.293 CC lib/nvme/nvme_ns_cmd.o 00:10:08.293 CC lib/nvme/nvme_fabric.o 00:10:08.293 CC lib/nvme/nvme_ctrlr.o 00:10:08.293 CC lib/nvme/nvme_ns.o 00:10:08.293 CC lib/nvme/nvme_pcie_common.o 00:10:08.293 CC lib/nvme/nvme.o 00:10:08.293 CC lib/nvme/nvme_pcie.o 00:10:08.293 CC lib/nvme/nvme_qpair.o 00:10:08.856 LIB libspdk_thread.a 00:10:08.856 SO libspdk_thread.so.11.0 00:10:08.856 SYMLINK libspdk_thread.so 00:10:08.856 CC lib/nvme/nvme_quirks.o 00:10:08.856 CC lib/nvme/nvme_transport.o 00:10:08.856 CC lib/nvme/nvme_discovery.o 00:10:09.113 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:09.113 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:09.113 CC lib/nvme/nvme_tcp.o 00:10:09.113 CC lib/accel/accel.o 00:10:09.113 CC lib/nvme/nvme_opal.o 00:10:09.370 CC lib/nvme/nvme_io_msg.o 00:10:09.370 CC lib/accel/accel_rpc.o 00:10:09.628 CC lib/accel/accel_sw.o 00:10:09.628 CC lib/nvme/nvme_poll_group.o 00:10:09.628 CC lib/nvme/nvme_zns.o 00:10:09.628 CC lib/nvme/nvme_stubs.o 00:10:09.628 CC lib/nvme/nvme_auth.o 00:10:09.885 CC lib/nvme/nvme_cuse.o 00:10:09.885 CC lib/nvme/nvme_rdma.o 00:10:10.144 CC lib/blob/blobstore.o 00:10:10.144 CC lib/blob/request.o 00:10:10.144 CC lib/blob/zeroes.o 00:10:10.144 CC lib/init/json_config.o 00:10:10.402 CC lib/blob/blob_bs_dev.o 00:10:10.402 LIB libspdk_accel.a 00:10:10.402 CC lib/init/subsystem.o 00:10:10.402 SO libspdk_accel.so.16.0 00:10:10.402 SYMLINK libspdk_accel.so 00:10:10.402 CC lib/init/subsystem_rpc.o 00:10:10.402 CC lib/init/rpc.o 00:10:10.659 CC lib/virtio/virtio.o 00:10:10.659 CC lib/virtio/virtio_vhost_user.o 00:10:10.659 CC lib/virtio/virtio_vfio_user.o 00:10:10.659 CC lib/virtio/virtio_pci.o 00:10:10.659 LIB libspdk_init.a 00:10:10.659 CC lib/fsdev/fsdev.o 00:10:10.659 CC lib/fsdev/fsdev_io.o 00:10:10.659 SO libspdk_init.so.6.0 00:10:10.659 SYMLINK libspdk_init.so 00:10:10.659 CC lib/fsdev/fsdev_rpc.o 00:10:10.659 CC lib/bdev/bdev.o 00:10:10.917 CC lib/bdev/bdev_rpc.o 00:10:10.917 CC lib/bdev/bdev_zone.o 00:10:10.917 CC lib/bdev/part.o 00:10:10.917 LIB libspdk_virtio.a 00:10:10.917 CC lib/event/app.o 00:10:10.917 SO libspdk_virtio.so.7.0 00:10:10.917 SYMLINK libspdk_virtio.so 00:10:10.917 CC lib/bdev/scsi_nvme.o 00:10:10.917 CC lib/event/reactor.o 00:10:11.174 CC lib/event/log_rpc.o 00:10:11.174 CC lib/event/app_rpc.o 00:10:11.174 CC lib/event/scheduler_static.o 00:10:11.174 LIB libspdk_nvme.a 00:10:11.431 LIB libspdk_fsdev.a 00:10:11.431 SO libspdk_nvme.so.15.0 00:10:11.431 SO libspdk_fsdev.so.2.0 00:10:11.431 SYMLINK libspdk_fsdev.so 00:10:11.431 LIB libspdk_event.a 00:10:11.431 SO libspdk_event.so.14.0 00:10:11.687 SYMLINK libspdk_event.so 00:10:11.687 SYMLINK libspdk_nvme.so 00:10:11.687 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:10:12.253 LIB libspdk_fuse_dispatcher.a 00:10:12.253 SO libspdk_fuse_dispatcher.so.1.0 00:10:12.510 SYMLINK libspdk_fuse_dispatcher.so 00:10:13.076 LIB libspdk_bdev.a 00:10:13.334 SO libspdk_bdev.so.17.0 00:10:13.334 LIB libspdk_blob.a 00:10:13.334 SYMLINK libspdk_bdev.so 00:10:13.334 SO libspdk_blob.so.12.0 00:10:13.334 SYMLINK libspdk_blob.so 00:10:13.592 CC lib/ublk/ublk_rpc.o 00:10:13.592 CC lib/ublk/ublk.o 00:10:13.592 CC lib/nvmf/ctrlr.o 00:10:13.592 CC lib/nbd/nbd.o 00:10:13.592 CC lib/scsi/dev.o 
00:10:13.592 CC lib/nvmf/ctrlr_discovery.o 00:10:13.592 CC lib/scsi/lun.o 00:10:13.592 CC lib/ftl/ftl_core.o 00:10:13.592 CC lib/blobfs/blobfs.o 00:10:13.592 CC lib/lvol/lvol.o 00:10:13.592 CC lib/blobfs/tree.o 00:10:13.592 CC lib/nvmf/ctrlr_bdev.o 00:10:13.849 CC lib/scsi/port.o 00:10:13.849 CC lib/ftl/ftl_init.o 00:10:13.849 CC lib/ftl/ftl_layout.o 00:10:13.849 CC lib/ftl/ftl_debug.o 00:10:13.849 CC lib/scsi/scsi.o 00:10:13.849 CC lib/nbd/nbd_rpc.o 00:10:13.849 CC lib/scsi/scsi_bdev.o 00:10:14.107 LIB libspdk_ublk.a 00:10:14.107 SO libspdk_ublk.so.3.0 00:10:14.108 CC lib/nvmf/subsystem.o 00:10:14.108 SYMLINK libspdk_ublk.so 00:10:14.108 CC lib/scsi/scsi_pr.o 00:10:14.108 CC lib/scsi/scsi_rpc.o 00:10:14.108 LIB libspdk_nbd.a 00:10:14.108 CC lib/ftl/ftl_io.o 00:10:14.108 SO libspdk_nbd.so.7.0 00:10:14.108 SYMLINK libspdk_nbd.so 00:10:14.108 CC lib/scsi/task.o 00:10:14.108 CC lib/nvmf/nvmf.o 00:10:14.365 CC lib/ftl/ftl_sb.o 00:10:14.365 CC lib/ftl/ftl_l2p.o 00:10:14.365 CC lib/ftl/ftl_l2p_flat.o 00:10:14.365 LIB libspdk_scsi.a 00:10:14.365 CC lib/nvmf/nvmf_rpc.o 00:10:14.365 SO libspdk_scsi.so.9.0 00:10:14.365 LIB libspdk_blobfs.a 00:10:14.365 CC lib/ftl/ftl_nv_cache.o 00:10:14.365 SO libspdk_blobfs.so.11.0 00:10:14.365 SYMLINK libspdk_scsi.so 00:10:14.365 CC lib/nvmf/transport.o 00:10:14.365 CC lib/nvmf/tcp.o 00:10:14.631 CC lib/ftl/ftl_band.o 00:10:14.631 SYMLINK libspdk_blobfs.so 00:10:14.631 LIB libspdk_lvol.a 00:10:14.631 SO libspdk_lvol.so.11.0 00:10:14.631 SYMLINK libspdk_lvol.so 00:10:14.631 CC lib/nvmf/stubs.o 00:10:14.631 CC lib/iscsi/conn.o 00:10:14.888 CC lib/iscsi/init_grp.o 00:10:14.888 CC lib/ftl/ftl_band_ops.o 00:10:15.145 CC lib/ftl/ftl_writer.o 00:10:15.145 CC lib/ftl/ftl_rq.o 00:10:15.145 CC lib/ftl/ftl_reloc.o 00:10:15.145 CC lib/ftl/ftl_l2p_cache.o 00:10:15.145 CC lib/nvmf/mdns_server.o 00:10:15.145 CC lib/nvmf/rdma.o 00:10:15.403 CC lib/nvmf/auth.o 00:10:15.403 CC lib/iscsi/iscsi.o 00:10:15.403 CC lib/ftl/ftl_p2l.o 00:10:15.403 CC lib/ftl/ftl_p2l_log.o 00:10:15.403 CC lib/ftl/mngt/ftl_mngt.o 00:10:15.403 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:15.661 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:15.661 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:15.661 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:15.661 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:15.661 CC lib/iscsi/param.o 00:10:15.661 CC lib/iscsi/portal_grp.o 00:10:15.918 CC lib/vhost/vhost.o 00:10:15.918 CC lib/vhost/vhost_rpc.o 00:10:15.918 CC lib/vhost/vhost_scsi.o 00:10:15.918 CC lib/vhost/vhost_blk.o 00:10:15.918 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:15.918 CC lib/iscsi/tgt_node.o 00:10:15.918 CC lib/iscsi/iscsi_subsystem.o 00:10:16.176 CC lib/iscsi/iscsi_rpc.o 00:10:16.176 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:16.434 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:16.434 CC lib/iscsi/task.o 00:10:16.434 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:16.434 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:16.434 CC lib/vhost/rte_vhost_user.o 00:10:16.434 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:16.434 LIB libspdk_iscsi.a 00:10:16.692 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:16.692 CC lib/ftl/utils/ftl_conf.o 00:10:16.692 SO libspdk_iscsi.so.8.0 00:10:16.692 CC lib/ftl/utils/ftl_md.o 00:10:16.692 CC lib/ftl/utils/ftl_mempool.o 00:10:16.692 CC lib/ftl/utils/ftl_bitmap.o 00:10:16.692 CC lib/ftl/utils/ftl_property.o 00:10:16.692 SYMLINK libspdk_iscsi.so 00:10:16.692 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:16.692 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:16.692 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:16.949 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:10:16.949 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:16.949 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:16.949 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:10:16.949 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:16.949 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:16.949 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:16.949 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:16.949 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:10:16.949 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:10:16.949 CC lib/ftl/base/ftl_base_dev.o 00:10:17.206 CC lib/ftl/base/ftl_base_bdev.o 00:10:17.206 CC lib/ftl/ftl_trace.o 00:10:17.464 LIB libspdk_ftl.a 00:10:17.464 LIB libspdk_vhost.a 00:10:17.464 SO libspdk_vhost.so.8.0 00:10:17.722 SO libspdk_ftl.so.9.0 00:10:17.722 SYMLINK libspdk_vhost.so 00:10:17.722 LIB libspdk_nvmf.a 00:10:17.722 SYMLINK libspdk_ftl.so 00:10:17.979 SO libspdk_nvmf.so.20.0 00:10:17.979 SYMLINK libspdk_nvmf.so 00:10:18.544 CC module/env_dpdk/env_dpdk_rpc.o 00:10:18.544 CC module/accel/error/accel_error.o 00:10:18.544 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:18.544 CC module/blob/bdev/blob_bdev.o 00:10:18.544 CC module/scheduler/gscheduler/gscheduler.o 00:10:18.544 CC module/sock/posix/posix.o 00:10:18.544 CC module/keyring/linux/keyring.o 00:10:18.544 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:18.544 CC module/keyring/file/keyring.o 00:10:18.544 CC module/fsdev/aio/fsdev_aio.o 00:10:18.544 LIB libspdk_env_dpdk_rpc.a 00:10:18.544 SO libspdk_env_dpdk_rpc.so.6.0 00:10:18.544 SYMLINK libspdk_env_dpdk_rpc.so 00:10:18.544 CC module/fsdev/aio/fsdev_aio_rpc.o 00:10:18.544 LIB libspdk_scheduler_dpdk_governor.a 00:10:18.544 CC module/keyring/linux/keyring_rpc.o 00:10:18.544 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:18.544 CC module/keyring/file/keyring_rpc.o 00:10:18.544 LIB libspdk_scheduler_gscheduler.a 00:10:18.544 SO libspdk_scheduler_gscheduler.so.4.0 00:10:18.544 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:18.544 CC module/accel/error/accel_error_rpc.o 00:10:18.544 SYMLINK libspdk_scheduler_gscheduler.so 00:10:18.802 LIB libspdk_keyring_linux.a 00:10:18.802 CC module/fsdev/aio/linux_aio_mgr.o 00:10:18.802 LIB libspdk_scheduler_dynamic.a 00:10:18.802 SO libspdk_keyring_linux.so.1.0 00:10:18.802 LIB libspdk_blob_bdev.a 00:10:18.802 SO libspdk_scheduler_dynamic.so.4.0 00:10:18.802 LIB libspdk_keyring_file.a 00:10:18.802 SO libspdk_blob_bdev.so.12.0 00:10:18.802 LIB libspdk_accel_error.a 00:10:18.802 SO libspdk_keyring_file.so.2.0 00:10:18.802 SYMLINK libspdk_keyring_linux.so 00:10:18.802 SO libspdk_accel_error.so.2.0 00:10:18.802 CC module/accel/ioat/accel_ioat.o 00:10:18.802 SYMLINK libspdk_scheduler_dynamic.so 00:10:18.802 SYMLINK libspdk_blob_bdev.so 00:10:18.802 SYMLINK libspdk_accel_error.so 00:10:18.802 CC module/accel/ioat/accel_ioat_rpc.o 00:10:18.802 SYMLINK libspdk_keyring_file.so 00:10:18.802 CC module/accel/dsa/accel_dsa.o 00:10:18.802 CC module/accel/dsa/accel_dsa_rpc.o 00:10:19.059 LIB libspdk_accel_ioat.a 00:10:19.059 CC module/accel/iaa/accel_iaa.o 00:10:19.059 SO libspdk_accel_ioat.so.6.0 00:10:19.059 CC module/bdev/delay/vbdev_delay.o 00:10:19.059 SYMLINK libspdk_accel_ioat.so 00:10:19.059 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:19.059 CC module/blobfs/bdev/blobfs_bdev.o 00:10:19.059 CC module/bdev/error/vbdev_error.o 00:10:19.059 CC module/bdev/gpt/gpt.o 00:10:19.059 CC module/bdev/lvol/vbdev_lvol.o 00:10:19.059 LIB libspdk_accel_dsa.a 00:10:19.059 CC module/accel/iaa/accel_iaa_rpc.o 00:10:19.059 SO libspdk_accel_dsa.so.5.0 00:10:19.317 LIB libspdk_fsdev_aio.a 00:10:19.317 SO 
libspdk_fsdev_aio.so.1.0 00:10:19.317 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:19.317 SYMLINK libspdk_accel_dsa.so 00:10:19.317 LIB libspdk_sock_posix.a 00:10:19.317 LIB libspdk_accel_iaa.a 00:10:19.317 CC module/bdev/gpt/vbdev_gpt.o 00:10:19.317 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:19.317 SO libspdk_accel_iaa.so.3.0 00:10:19.317 SO libspdk_sock_posix.so.6.0 00:10:19.317 SYMLINK libspdk_fsdev_aio.so 00:10:19.317 SYMLINK libspdk_accel_iaa.so 00:10:19.317 SYMLINK libspdk_sock_posix.so 00:10:19.317 LIB libspdk_blobfs_bdev.a 00:10:19.317 CC module/bdev/error/vbdev_error_rpc.o 00:10:19.317 SO libspdk_blobfs_bdev.so.6.0 00:10:19.317 CC module/bdev/nvme/bdev_nvme.o 00:10:19.317 CC module/bdev/null/bdev_null.o 00:10:19.317 CC module/bdev/malloc/bdev_malloc.o 00:10:19.575 SYMLINK libspdk_blobfs_bdev.so 00:10:19.575 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:19.575 LIB libspdk_bdev_error.a 00:10:19.575 CC module/bdev/passthru/vbdev_passthru.o 00:10:19.575 LIB libspdk_bdev_gpt.a 00:10:19.575 SO libspdk_bdev_error.so.6.0 00:10:19.575 LIB libspdk_bdev_delay.a 00:10:19.575 SO libspdk_bdev_delay.so.6.0 00:10:19.575 SO libspdk_bdev_gpt.so.6.0 00:10:19.575 SYMLINK libspdk_bdev_error.so 00:10:19.575 SYMLINK libspdk_bdev_delay.so 00:10:19.575 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:19.575 SYMLINK libspdk_bdev_gpt.so 00:10:19.575 LIB libspdk_bdev_lvol.a 00:10:19.575 SO libspdk_bdev_lvol.so.6.0 00:10:19.575 CC module/bdev/null/bdev_null_rpc.o 00:10:19.833 CC module/bdev/raid/bdev_raid.o 00:10:19.833 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:19.833 SYMLINK libspdk_bdev_lvol.so 00:10:19.833 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:19.833 CC module/bdev/nvme/nvme_rpc.o 00:10:19.833 CC module/bdev/split/vbdev_split.o 00:10:19.833 LIB libspdk_bdev_passthru.a 00:10:19.833 LIB libspdk_bdev_null.a 00:10:19.833 SO libspdk_bdev_passthru.so.6.0 00:10:19.833 SO libspdk_bdev_null.so.6.0 00:10:19.833 SYMLINK libspdk_bdev_passthru.so 00:10:19.833 CC module/bdev/nvme/bdev_mdns_client.o 00:10:19.833 SYMLINK libspdk_bdev_null.so 00:10:19.833 CC module/bdev/xnvme/bdev_xnvme.o 00:10:19.833 CC module/bdev/nvme/vbdev_opal.o 00:10:19.833 LIB libspdk_bdev_malloc.a 00:10:19.833 SO libspdk_bdev_malloc.so.6.0 00:10:20.206 CC module/bdev/split/vbdev_split_rpc.o 00:10:20.206 SYMLINK libspdk_bdev_malloc.so 00:10:20.206 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:10:20.206 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:20.206 CC module/bdev/aio/bdev_aio.o 00:10:20.206 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:20.206 CC module/bdev/ftl/bdev_ftl.o 00:10:20.206 LIB libspdk_bdev_split.a 00:10:20.206 SO libspdk_bdev_split.so.6.0 00:10:20.206 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:20.206 CC module/bdev/aio/bdev_aio_rpc.o 00:10:20.464 LIB libspdk_bdev_xnvme.a 00:10:20.464 SYMLINK libspdk_bdev_split.so 00:10:20.464 LIB libspdk_bdev_zone_block.a 00:10:20.464 SO libspdk_bdev_xnvme.so.3.0 00:10:20.464 SO libspdk_bdev_zone_block.so.6.0 00:10:20.464 SYMLINK libspdk_bdev_xnvme.so 00:10:20.464 SYMLINK libspdk_bdev_zone_block.so 00:10:20.464 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:20.464 CC module/bdev/raid/bdev_raid_rpc.o 00:10:20.464 CC module/bdev/raid/bdev_raid_sb.o 00:10:20.464 CC module/bdev/raid/raid0.o 00:10:20.464 CC module/bdev/iscsi/bdev_iscsi.o 00:10:20.464 LIB libspdk_bdev_ftl.a 00:10:20.464 SO libspdk_bdev_ftl.so.6.0 00:10:20.464 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:20.464 LIB libspdk_bdev_aio.a 00:10:20.464 SYMLINK libspdk_bdev_ftl.so 00:10:20.464 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:10:20.464 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:20.464 CC module/bdev/raid/raid1.o 00:10:20.464 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:20.723 CC module/bdev/raid/concat.o 00:10:20.723 LIB libspdk_bdev_iscsi.a 00:10:20.723 LIB libspdk_bdev_raid.a 00:10:20.981 LIB libspdk_bdev_virtio.a 00:10:21.546 LIB libspdk_bdev_nvme.a 00:10:21.804 SO libspdk_bdev_iscsi.so.6.0 00:10:21.804 SO libspdk_bdev_virtio.so.6.0 00:10:21.804 SO libspdk_bdev_aio.so.6.0 00:10:21.804 SO libspdk_bdev_raid.so.6.0 00:10:21.804 SO libspdk_bdev_nvme.so.7.1 00:10:21.804 SYMLINK libspdk_bdev_iscsi.so 00:10:21.804 SYMLINK libspdk_bdev_aio.so 00:10:21.804 SYMLINK libspdk_bdev_virtio.so 00:10:21.804 SYMLINK libspdk_bdev_raid.so 00:10:21.804 SYMLINK libspdk_bdev_nvme.so 00:10:22.368 CC module/event/subsystems/keyring/keyring.o 00:10:22.368 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:22.368 CC module/event/subsystems/sock/sock.o 00:10:22.368 CC module/event/subsystems/fsdev/fsdev.o 00:10:22.368 CC module/event/subsystems/vmd/vmd.o 00:10:22.368 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:22.368 CC module/event/subsystems/iobuf/iobuf.o 00:10:22.368 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:22.368 CC module/event/subsystems/scheduler/scheduler.o 00:10:22.368 LIB libspdk_event_vhost_blk.a 00:10:22.368 LIB libspdk_event_sock.a 00:10:22.368 LIB libspdk_event_scheduler.a 00:10:22.368 LIB libspdk_event_keyring.a 00:10:22.368 LIB libspdk_event_vmd.a 00:10:22.368 LIB libspdk_event_fsdev.a 00:10:22.368 SO libspdk_event_vhost_blk.so.3.0 00:10:22.368 SO libspdk_event_scheduler.so.4.0 00:10:22.368 SO libspdk_event_keyring.so.1.0 00:10:22.368 SO libspdk_event_sock.so.5.0 00:10:22.368 SO libspdk_event_vmd.so.6.0 00:10:22.368 SO libspdk_event_fsdev.so.1.0 00:10:22.368 LIB libspdk_event_iobuf.a 00:10:22.368 SYMLINK libspdk_event_vhost_blk.so 00:10:22.368 SO libspdk_event_iobuf.so.3.0 00:10:22.368 SYMLINK libspdk_event_sock.so 00:10:22.368 SYMLINK libspdk_event_scheduler.so 00:10:22.368 SYMLINK libspdk_event_keyring.so 00:10:22.368 SYMLINK libspdk_event_fsdev.so 00:10:22.368 SYMLINK libspdk_event_vmd.so 00:10:22.368 SYMLINK libspdk_event_iobuf.so 00:10:22.625 CC module/event/subsystems/accel/accel.o 00:10:22.882 LIB libspdk_event_accel.a 00:10:22.882 SO libspdk_event_accel.so.6.0 00:10:22.882 SYMLINK libspdk_event_accel.so 00:10:23.139 CC module/event/subsystems/bdev/bdev.o 00:10:23.397 LIB libspdk_event_bdev.a 00:10:23.397 SO libspdk_event_bdev.so.6.0 00:10:23.397 SYMLINK libspdk_event_bdev.so 00:10:23.655 CC module/event/subsystems/ublk/ublk.o 00:10:23.655 CC module/event/subsystems/nbd/nbd.o 00:10:23.655 CC module/event/subsystems/scsi/scsi.o 00:10:23.655 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:23.655 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:23.655 LIB libspdk_event_nbd.a 00:10:23.655 LIB libspdk_event_ublk.a 00:10:23.655 LIB libspdk_event_scsi.a 00:10:23.655 SO libspdk_event_ublk.so.3.0 00:10:23.655 SO libspdk_event_nbd.so.6.0 00:10:23.655 SO libspdk_event_scsi.so.6.0 00:10:23.913 SYMLINK libspdk_event_ublk.so 00:10:23.913 SYMLINK libspdk_event_nbd.so 00:10:23.913 LIB libspdk_event_nvmf.a 00:10:23.913 SYMLINK libspdk_event_scsi.so 00:10:23.913 SO libspdk_event_nvmf.so.6.0 00:10:23.913 SYMLINK libspdk_event_nvmf.so 00:10:23.913 CC module/event/subsystems/iscsi/iscsi.o 00:10:23.913 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:24.169 LIB libspdk_event_iscsi.a 00:10:24.169 LIB libspdk_event_vhost_scsi.a 00:10:24.169 SO libspdk_event_iscsi.so.6.0 
00:10:24.169 SO libspdk_event_vhost_scsi.so.3.0 00:10:24.169 SYMLINK libspdk_event_iscsi.so 00:10:24.169 SYMLINK libspdk_event_vhost_scsi.so 00:10:24.427 SO libspdk.so.6.0 00:10:24.427 SYMLINK libspdk.so 00:10:24.427 TEST_HEADER include/spdk/accel.h 00:10:24.427 CXX app/trace/trace.o 00:10:24.427 TEST_HEADER include/spdk/accel_module.h 00:10:24.427 TEST_HEADER include/spdk/assert.h 00:10:24.427 TEST_HEADER include/spdk/barrier.h 00:10:24.427 TEST_HEADER include/spdk/base64.h 00:10:24.427 TEST_HEADER include/spdk/bdev.h 00:10:24.427 CC app/trace_record/trace_record.o 00:10:24.427 TEST_HEADER include/spdk/bdev_module.h 00:10:24.427 TEST_HEADER include/spdk/bdev_zone.h 00:10:24.427 TEST_HEADER include/spdk/bit_array.h 00:10:24.427 TEST_HEADER include/spdk/bit_pool.h 00:10:24.427 TEST_HEADER include/spdk/blob_bdev.h 00:10:24.684 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:24.684 TEST_HEADER include/spdk/blobfs.h 00:10:24.684 TEST_HEADER include/spdk/blob.h 00:10:24.684 TEST_HEADER include/spdk/conf.h 00:10:24.684 TEST_HEADER include/spdk/config.h 00:10:24.684 TEST_HEADER include/spdk/cpuset.h 00:10:24.684 TEST_HEADER include/spdk/crc16.h 00:10:24.684 TEST_HEADER include/spdk/crc32.h 00:10:24.684 TEST_HEADER include/spdk/crc64.h 00:10:24.684 TEST_HEADER include/spdk/dif.h 00:10:24.684 CC app/iscsi_tgt/iscsi_tgt.o 00:10:24.684 TEST_HEADER include/spdk/dma.h 00:10:24.684 TEST_HEADER include/spdk/endian.h 00:10:24.684 TEST_HEADER include/spdk/env_dpdk.h 00:10:24.684 TEST_HEADER include/spdk/env.h 00:10:24.684 TEST_HEADER include/spdk/event.h 00:10:24.684 TEST_HEADER include/spdk/fd_group.h 00:10:24.684 TEST_HEADER include/spdk/fd.h 00:10:24.684 TEST_HEADER include/spdk/file.h 00:10:24.684 TEST_HEADER include/spdk/fsdev.h 00:10:24.684 TEST_HEADER include/spdk/fsdev_module.h 00:10:24.684 CC app/nvmf_tgt/nvmf_main.o 00:10:24.684 CC app/spdk_tgt/spdk_tgt.o 00:10:24.684 TEST_HEADER include/spdk/ftl.h 00:10:24.684 TEST_HEADER include/spdk/fuse_dispatcher.h 00:10:24.684 TEST_HEADER include/spdk/gpt_spec.h 00:10:24.684 TEST_HEADER include/spdk/hexlify.h 00:10:24.684 TEST_HEADER include/spdk/histogram_data.h 00:10:24.684 TEST_HEADER include/spdk/idxd.h 00:10:24.684 CC test/thread/poller_perf/poller_perf.o 00:10:24.684 TEST_HEADER include/spdk/idxd_spec.h 00:10:24.684 TEST_HEADER include/spdk/init.h 00:10:24.684 TEST_HEADER include/spdk/ioat.h 00:10:24.684 TEST_HEADER include/spdk/ioat_spec.h 00:10:24.684 TEST_HEADER include/spdk/iscsi_spec.h 00:10:24.684 TEST_HEADER include/spdk/json.h 00:10:24.684 TEST_HEADER include/spdk/jsonrpc.h 00:10:24.684 TEST_HEADER include/spdk/keyring.h 00:10:24.684 CC examples/util/zipf/zipf.o 00:10:24.685 TEST_HEADER include/spdk/keyring_module.h 00:10:24.685 TEST_HEADER include/spdk/likely.h 00:10:24.685 TEST_HEADER include/spdk/log.h 00:10:24.685 TEST_HEADER include/spdk/lvol.h 00:10:24.685 TEST_HEADER include/spdk/md5.h 00:10:24.685 TEST_HEADER include/spdk/memory.h 00:10:24.685 TEST_HEADER include/spdk/mmio.h 00:10:24.685 TEST_HEADER include/spdk/nbd.h 00:10:24.685 TEST_HEADER include/spdk/net.h 00:10:24.685 TEST_HEADER include/spdk/notify.h 00:10:24.685 TEST_HEADER include/spdk/nvme.h 00:10:24.685 TEST_HEADER include/spdk/nvme_intel.h 00:10:24.685 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:24.685 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:24.685 TEST_HEADER include/spdk/nvme_spec.h 00:10:24.685 TEST_HEADER include/spdk/nvme_zns.h 00:10:24.685 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:24.685 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:24.685 TEST_HEADER 
include/spdk/nvmf.h 00:10:24.685 TEST_HEADER include/spdk/nvmf_spec.h 00:10:24.685 TEST_HEADER include/spdk/nvmf_transport.h 00:10:24.685 TEST_HEADER include/spdk/opal.h 00:10:24.685 TEST_HEADER include/spdk/opal_spec.h 00:10:24.685 TEST_HEADER include/spdk/pci_ids.h 00:10:24.685 TEST_HEADER include/spdk/pipe.h 00:10:24.685 TEST_HEADER include/spdk/queue.h 00:10:24.685 TEST_HEADER include/spdk/reduce.h 00:10:24.685 TEST_HEADER include/spdk/rpc.h 00:10:24.685 TEST_HEADER include/spdk/scheduler.h 00:10:24.685 CC test/app/bdev_svc/bdev_svc.o 00:10:24.685 TEST_HEADER include/spdk/scsi.h 00:10:24.685 TEST_HEADER include/spdk/scsi_spec.h 00:10:24.685 TEST_HEADER include/spdk/sock.h 00:10:24.685 TEST_HEADER include/spdk/stdinc.h 00:10:24.685 CC test/dma/test_dma/test_dma.o 00:10:24.685 TEST_HEADER include/spdk/string.h 00:10:24.685 TEST_HEADER include/spdk/thread.h 00:10:24.685 TEST_HEADER include/spdk/trace.h 00:10:24.685 TEST_HEADER include/spdk/trace_parser.h 00:10:24.685 TEST_HEADER include/spdk/tree.h 00:10:24.685 TEST_HEADER include/spdk/ublk.h 00:10:24.685 TEST_HEADER include/spdk/util.h 00:10:24.685 TEST_HEADER include/spdk/uuid.h 00:10:24.685 TEST_HEADER include/spdk/version.h 00:10:24.685 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:24.685 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:24.685 TEST_HEADER include/spdk/vhost.h 00:10:24.685 TEST_HEADER include/spdk/vmd.h 00:10:24.685 TEST_HEADER include/spdk/xor.h 00:10:24.685 TEST_HEADER include/spdk/zipf.h 00:10:24.685 CXX test/cpp_headers/accel.o 00:10:24.685 LINK poller_perf 00:10:24.685 LINK iscsi_tgt 00:10:24.685 LINK zipf 00:10:24.685 LINK nvmf_tgt 00:10:24.685 LINK spdk_trace_record 00:10:24.942 LINK spdk_tgt 00:10:24.942 LINK bdev_svc 00:10:24.942 CXX test/cpp_headers/accel_module.o 00:10:24.942 LINK spdk_trace 00:10:24.942 CXX test/cpp_headers/assert.o 00:10:24.942 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:24.942 CC examples/ioat/perf/perf.o 00:10:24.942 CC app/spdk_lspci/spdk_lspci.o 00:10:25.199 CXX test/cpp_headers/barrier.o 00:10:25.199 CC app/spdk_nvme_perf/perf.o 00:10:25.199 CC test/env/mem_callbacks/mem_callbacks.o 00:10:25.199 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:25.199 CC examples/thread/thread/thread_ex.o 00:10:25.199 LINK spdk_lspci 00:10:25.199 CC test/app/histogram_perf/histogram_perf.o 00:10:25.199 LINK interrupt_tgt 00:10:25.199 CXX test/cpp_headers/base64.o 00:10:25.199 LINK test_dma 00:10:25.199 LINK ioat_perf 00:10:25.456 LINK histogram_perf 00:10:25.456 CXX test/cpp_headers/bdev.o 00:10:25.456 LINK thread 00:10:25.456 CC app/spdk_nvme_identify/identify.o 00:10:25.456 CXX test/cpp_headers/bdev_module.o 00:10:25.456 CC examples/ioat/verify/verify.o 00:10:25.456 CXX test/cpp_headers/bdev_zone.o 00:10:25.456 CC test/event/event_perf/event_perf.o 00:10:25.714 LINK nvme_fuzz 00:10:25.714 CC app/spdk_nvme_discover/discovery_aer.o 00:10:25.714 CC app/spdk_top/spdk_top.o 00:10:25.714 CXX test/cpp_headers/bit_array.o 00:10:25.714 LINK mem_callbacks 00:10:25.714 LINK verify 00:10:25.714 LINK event_perf 00:10:25.714 CC test/env/vtophys/vtophys.o 00:10:25.714 CXX test/cpp_headers/bit_pool.o 00:10:25.714 LINK spdk_nvme_perf 00:10:25.971 LINK spdk_nvme_discover 00:10:25.971 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:25.971 LINK vtophys 00:10:25.971 CXX test/cpp_headers/blob_bdev.o 00:10:25.971 CC test/event/reactor/reactor.o 00:10:25.971 CC examples/sock/hello_world/hello_sock.o 00:10:25.971 CC examples/vmd/lsvmd/lsvmd.o 00:10:25.971 LINK reactor 00:10:25.971 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:26.229 LINK lsvmd 00:10:26.229 CXX test/cpp_headers/blobfs_bdev.o 00:10:26.229 CC examples/idxd/perf/perf.o 00:10:26.229 CC examples/fsdev/hello_world/hello_fsdev.o 00:10:26.229 LINK hello_sock 00:10:26.229 LINK spdk_nvme_identify 00:10:26.229 LINK env_dpdk_post_init 00:10:26.229 CXX test/cpp_headers/blobfs.o 00:10:26.229 CC test/event/reactor_perf/reactor_perf.o 00:10:26.229 CXX test/cpp_headers/blob.o 00:10:26.229 CC examples/vmd/led/led.o 00:10:26.486 LINK idxd_perf 00:10:26.486 CC test/env/memory/memory_ut.o 00:10:26.486 LINK hello_fsdev 00:10:26.486 LINK reactor_perf 00:10:26.486 CXX test/cpp_headers/conf.o 00:10:26.486 LINK led 00:10:26.486 CC examples/accel/perf/accel_perf.o 00:10:26.743 LINK spdk_top 00:10:26.743 CC examples/blob/hello_world/hello_blob.o 00:10:26.743 CXX test/cpp_headers/config.o 00:10:26.743 CXX test/cpp_headers/cpuset.o 00:10:26.743 CC test/env/pci/pci_ut.o 00:10:26.743 CC test/event/app_repeat/app_repeat.o 00:10:26.743 CC examples/nvme/hello_world/hello_world.o 00:10:26.743 CC app/vhost/vhost.o 00:10:26.743 CXX test/cpp_headers/crc16.o 00:10:26.743 CC test/event/scheduler/scheduler.o 00:10:27.003 LINK app_repeat 00:10:27.003 LINK hello_blob 00:10:27.003 LINK vhost 00:10:27.003 CXX test/cpp_headers/crc32.o 00:10:27.003 LINK hello_world 00:10:27.003 LINK pci_ut 00:10:27.003 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:27.003 LINK accel_perf 00:10:27.003 LINK scheduler 00:10:27.262 CXX test/cpp_headers/crc64.o 00:10:27.262 CC examples/blob/cli/blobcli.o 00:10:27.262 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:27.262 CXX test/cpp_headers/dif.o 00:10:27.262 CC examples/nvme/reconnect/reconnect.o 00:10:27.262 CC app/spdk_dd/spdk_dd.o 00:10:27.262 CC test/app/jsoncat/jsoncat.o 00:10:27.262 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:27.519 CXX test/cpp_headers/dma.o 00:10:27.519 CC examples/bdev/hello_world/hello_bdev.o 00:10:27.519 LINK jsoncat 00:10:27.519 CXX test/cpp_headers/endian.o 00:10:27.519 LINK iscsi_fuzz 00:10:27.519 LINK reconnect 00:10:27.519 LINK vhost_fuzz 00:10:27.776 LINK spdk_dd 00:10:27.776 LINK hello_bdev 00:10:27.776 LINK blobcli 00:10:27.776 LINK memory_ut 00:10:27.776 CXX test/cpp_headers/env_dpdk.o 00:10:27.776 CC examples/bdev/bdevperf/bdevperf.o 00:10:27.776 CC test/app/stub/stub.o 00:10:27.776 CC examples/nvme/arbitration/arbitration.o 00:10:27.776 CXX test/cpp_headers/env.o 00:10:27.776 LINK nvme_manage 00:10:28.056 CC app/fio/nvme/fio_plugin.o 00:10:28.056 CC examples/nvme/hotplug/hotplug.o 00:10:28.056 CC examples/nvme/abort/abort.o 00:10:28.056 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:28.056 CC app/fio/bdev/fio_plugin.o 00:10:28.056 CXX test/cpp_headers/event.o 00:10:28.056 LINK stub 00:10:28.056 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:28.056 LINK cmb_copy 00:10:28.056 CXX test/cpp_headers/fd_group.o 00:10:28.056 CXX test/cpp_headers/fd.o 00:10:28.056 LINK hotplug 00:10:28.319 LINK arbitration 00:10:28.319 LINK pmr_persistence 00:10:28.319 CXX test/cpp_headers/file.o 00:10:28.319 LINK abort 00:10:28.319 CC test/rpc_client/rpc_client_test.o 00:10:28.319 CXX test/cpp_headers/fsdev.o 00:10:28.319 CXX test/cpp_headers/fsdev_module.o 00:10:28.319 CC test/accel/dif/dif.o 00:10:28.319 CXX test/cpp_headers/ftl.o 00:10:28.575 LINK spdk_nvme 00:10:28.575 LINK spdk_bdev 00:10:28.575 CC test/blobfs/mkfs/mkfs.o 00:10:28.575 LINK rpc_client_test 00:10:28.575 CXX test/cpp_headers/fuse_dispatcher.o 00:10:28.575 CXX test/cpp_headers/gpt_spec.o 00:10:28.575 CXX 
test/cpp_headers/hexlify.o 00:10:28.575 LINK bdevperf 00:10:28.575 CC test/nvme/aer/aer.o 00:10:28.575 CXX test/cpp_headers/histogram_data.o 00:10:28.575 CC test/lvol/esnap/esnap.o 00:10:28.831 LINK mkfs 00:10:28.832 CXX test/cpp_headers/idxd.o 00:10:28.832 CXX test/cpp_headers/idxd_spec.o 00:10:28.832 CXX test/cpp_headers/init.o 00:10:28.832 CC test/nvme/reset/reset.o 00:10:28.832 CXX test/cpp_headers/ioat.o 00:10:28.832 CXX test/cpp_headers/ioat_spec.o 00:10:28.832 CXX test/cpp_headers/iscsi_spec.o 00:10:28.832 CXX test/cpp_headers/json.o 00:10:28.832 CXX test/cpp_headers/jsonrpc.o 00:10:28.832 LINK aer 00:10:28.832 CC test/nvme/sgl/sgl.o 00:10:29.088 CC examples/nvmf/nvmf/nvmf.o 00:10:29.089 LINK reset 00:10:29.089 CXX test/cpp_headers/keyring.o 00:10:29.089 CXX test/cpp_headers/keyring_module.o 00:10:29.089 CXX test/cpp_headers/likely.o 00:10:29.089 CC test/nvme/overhead/overhead.o 00:10:29.089 CC test/nvme/e2edp/nvme_dp.o 00:10:29.089 CXX test/cpp_headers/log.o 00:10:29.089 LINK dif 00:10:29.089 CXX test/cpp_headers/lvol.o 00:10:29.089 CXX test/cpp_headers/md5.o 00:10:29.346 LINK sgl 00:10:29.346 CC test/nvme/err_injection/err_injection.o 00:10:29.346 LINK nvmf 00:10:29.346 CXX test/cpp_headers/mmio.o 00:10:29.346 CXX test/cpp_headers/memory.o 00:10:29.346 CXX test/cpp_headers/nbd.o 00:10:29.346 LINK nvme_dp 00:10:29.346 CXX test/cpp_headers/net.o 00:10:29.346 LINK err_injection 00:10:29.346 CC test/nvme/startup/startup.o 00:10:29.346 CXX test/cpp_headers/notify.o 00:10:29.346 LINK overhead 00:10:29.346 CXX test/cpp_headers/nvme.o 00:10:29.603 CC test/nvme/reserve/reserve.o 00:10:29.603 CC test/bdev/bdevio/bdevio.o 00:10:29.603 CXX test/cpp_headers/nvme_intel.o 00:10:29.603 CXX test/cpp_headers/nvme_ocssd.o 00:10:29.603 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:29.603 LINK startup 00:10:29.603 CC test/nvme/simple_copy/simple_copy.o 00:10:29.603 CC test/nvme/connect_stress/connect_stress.o 00:10:29.603 CXX test/cpp_headers/nvme_spec.o 00:10:29.603 CC test/nvme/boot_partition/boot_partition.o 00:10:29.860 CXX test/cpp_headers/nvme_zns.o 00:10:29.860 CXX test/cpp_headers/nvmf_cmd.o 00:10:29.860 LINK reserve 00:10:29.860 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:29.860 LINK connect_stress 00:10:29.860 CC test/nvme/compliance/nvme_compliance.o 00:10:29.860 LINK boot_partition 00:10:29.860 LINK simple_copy 00:10:29.860 CXX test/cpp_headers/nvmf.o 00:10:29.860 LINK bdevio 00:10:29.860 CXX test/cpp_headers/nvmf_spec.o 00:10:29.860 CC test/nvme/fused_ordering/fused_ordering.o 00:10:30.117 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:30.117 CC test/nvme/fdp/fdp.o 00:10:30.117 CXX test/cpp_headers/nvmf_transport.o 00:10:30.117 CXX test/cpp_headers/opal.o 00:10:30.117 CXX test/cpp_headers/opal_spec.o 00:10:30.117 CXX test/cpp_headers/pci_ids.o 00:10:30.117 CC test/nvme/cuse/cuse.o 00:10:30.117 LINK fused_ordering 00:10:30.117 CXX test/cpp_headers/pipe.o 00:10:30.117 LINK doorbell_aers 00:10:30.117 LINK nvme_compliance 00:10:30.117 CXX test/cpp_headers/queue.o 00:10:30.374 CXX test/cpp_headers/reduce.o 00:10:30.374 CXX test/cpp_headers/rpc.o 00:10:30.374 CXX test/cpp_headers/scheduler.o 00:10:30.374 CXX test/cpp_headers/scsi.o 00:10:30.374 CXX test/cpp_headers/scsi_spec.o 00:10:30.374 CXX test/cpp_headers/sock.o 00:10:30.374 CXX test/cpp_headers/stdinc.o 00:10:30.374 LINK fdp 00:10:30.374 CXX test/cpp_headers/string.o 00:10:30.374 CXX test/cpp_headers/thread.o 00:10:30.374 CXX test/cpp_headers/trace.o 00:10:30.374 CXX test/cpp_headers/trace_parser.o 00:10:30.630 CXX test/cpp_headers/tree.o 
00:10:30.630 CXX test/cpp_headers/ublk.o 00:10:30.630 CXX test/cpp_headers/util.o 00:10:30.630 CXX test/cpp_headers/uuid.o 00:10:30.630 CXX test/cpp_headers/version.o 00:10:30.630 CXX test/cpp_headers/vfio_user_pci.o 00:10:30.630 CXX test/cpp_headers/vfio_user_spec.o 00:10:30.630 CXX test/cpp_headers/vhost.o 00:10:30.630 CXX test/cpp_headers/vmd.o 00:10:30.630 CXX test/cpp_headers/xor.o 00:10:30.630 CXX test/cpp_headers/zipf.o 00:10:31.562 LINK cuse 00:10:34.086 LINK esnap 00:10:34.343 00:10:34.343 real 1m10.773s 00:10:34.343 user 5m42.181s 00:10:34.344 sys 0m59.214s 00:10:34.344 12:41:34 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:10:34.344 12:41:34 make -- common/autotest_common.sh@10 -- $ set +x 00:10:34.344 ************************************ 00:10:34.344 END TEST make 00:10:34.344 ************************************ 00:10:34.344 12:41:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:34.344 12:41:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:34.344 12:41:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:34.344 12:41:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:34.344 12:41:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:34.344 12:41:34 -- pm/common@44 -- $ pid=5809 00:10:34.344 12:41:34 -- pm/common@50 -- $ kill -TERM 5809 00:10:34.344 12:41:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:34.344 12:41:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:34.344 12:41:34 -- pm/common@44 -- $ pid=5810 00:10:34.344 12:41:34 -- pm/common@50 -- $ kill -TERM 5810 00:10:34.344 12:41:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:10:34.344 12:41:34 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:34.344 12:41:34 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:34.344 12:41:34 -- common/autotest_common.sh@1711 -- # lcov --version 00:10:34.344 12:41:34 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:34.602 12:41:34 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:34.602 12:41:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.602 12:41:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.602 12:41:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.602 12:41:34 -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.602 12:41:34 -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.602 12:41:34 -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.602 12:41:34 -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.602 12:41:34 -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.602 12:41:34 -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.602 12:41:34 -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.602 12:41:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.602 12:41:34 -- scripts/common.sh@344 -- # case "$op" in 00:10:34.602 12:41:34 -- scripts/common.sh@345 -- # : 1 00:10:34.602 12:41:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.602 12:41:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.602 12:41:34 -- scripts/common.sh@365 -- # decimal 1 00:10:34.602 12:41:34 -- scripts/common.sh@353 -- # local d=1 00:10:34.602 12:41:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.602 12:41:34 -- scripts/common.sh@355 -- # echo 1 00:10:34.602 12:41:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.602 12:41:34 -- scripts/common.sh@366 -- # decimal 2 00:10:34.602 12:41:34 -- scripts/common.sh@353 -- # local d=2 00:10:34.602 12:41:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.602 12:41:34 -- scripts/common.sh@355 -- # echo 2 00:10:34.602 12:41:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.602 12:41:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.602 12:41:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.602 12:41:34 -- scripts/common.sh@368 -- # return 0 00:10:34.602 12:41:34 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.602 12:41:34 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.602 --rc genhtml_branch_coverage=1 00:10:34.602 --rc genhtml_function_coverage=1 00:10:34.602 --rc genhtml_legend=1 00:10:34.602 --rc geninfo_all_blocks=1 00:10:34.602 --rc geninfo_unexecuted_blocks=1 00:10:34.602 00:10:34.602 ' 00:10:34.602 12:41:34 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.602 --rc genhtml_branch_coverage=1 00:10:34.602 --rc genhtml_function_coverage=1 00:10:34.602 --rc genhtml_legend=1 00:10:34.602 --rc geninfo_all_blocks=1 00:10:34.602 --rc geninfo_unexecuted_blocks=1 00:10:34.602 00:10:34.602 ' 00:10:34.602 12:41:34 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.602 --rc genhtml_branch_coverage=1 00:10:34.602 --rc genhtml_function_coverage=1 00:10:34.602 --rc genhtml_legend=1 00:10:34.602 --rc geninfo_all_blocks=1 00:10:34.602 --rc geninfo_unexecuted_blocks=1 00:10:34.602 00:10:34.602 ' 00:10:34.602 12:41:34 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:34.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.602 --rc genhtml_branch_coverage=1 00:10:34.602 --rc genhtml_function_coverage=1 00:10:34.602 --rc genhtml_legend=1 00:10:34.602 --rc geninfo_all_blocks=1 00:10:34.602 --rc geninfo_unexecuted_blocks=1 00:10:34.602 00:10:34.602 ' 00:10:34.602 12:41:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.602 12:41:34 -- nvmf/common.sh@7 -- # uname -s 00:10:34.602 12:41:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.602 12:41:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.602 12:41:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.602 12:41:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.602 12:41:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.602 12:41:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.602 12:41:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.602 12:41:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.602 12:41:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.602 12:41:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.602 12:41:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f3c36401-3e16-41f5-8c92-eb65c167cf60 00:10:34.602 
12:41:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=f3c36401-3e16-41f5-8c92-eb65c167cf60 00:10:34.602 12:41:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.602 12:41:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.602 12:41:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:34.602 12:41:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.602 12:41:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.602 12:41:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.602 12:41:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.602 12:41:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.602 12:41:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.602 12:41:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.602 12:41:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.602 12:41:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.602 12:41:34 -- paths/export.sh@5 -- # export PATH 00:10:34.602 12:41:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.602 12:41:34 -- nvmf/common.sh@51 -- # : 0 00:10:34.602 12:41:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.602 12:41:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.602 12:41:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.602 12:41:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.602 12:41:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.602 12:41:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.602 12:41:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.602 12:41:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.602 12:41:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.602 12:41:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:34.602 12:41:34 -- spdk/autotest.sh@32 -- # uname -s 00:10:34.602 12:41:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:34.602 12:41:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:34.602 12:41:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:34.602 12:41:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:34.602 12:41:34 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:34.602 12:41:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:34.602 12:41:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:34.602 12:41:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:34.602 12:41:34 -- spdk/autotest.sh@48 -- # udevadm_pid=66731 00:10:34.602 12:41:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:34.602 12:41:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:34.602 12:41:34 -- pm/common@17 -- # local monitor 00:10:34.602 12:41:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:34.602 12:41:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:34.602 12:41:34 -- pm/common@25 -- # sleep 1 00:10:34.602 12:41:34 -- pm/common@21 -- # date +%s 00:10:34.602 12:41:34 -- pm/common@21 -- # date +%s 00:10:34.602 12:41:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733402494 00:10:34.603 12:41:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733402494 00:10:34.603 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733402494_collect-vmstat.pm.log 00:10:34.603 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733402494_collect-cpu-load.pm.log 00:10:35.535 12:41:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:35.535 12:41:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:35.535 12:41:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.535 12:41:35 -- common/autotest_common.sh@10 -- # set +x 00:10:35.535 12:41:35 -- spdk/autotest.sh@59 -- # create_test_list 00:10:35.535 12:41:35 -- common/autotest_common.sh@752 -- # xtrace_disable 00:10:35.535 12:41:35 -- common/autotest_common.sh@10 -- # set +x 00:10:35.535 12:41:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:35.535 12:41:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:35.792 12:41:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:35.792 12:41:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:35.792 12:41:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:35.792 12:41:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:35.792 12:41:35 -- common/autotest_common.sh@1457 -- # uname 00:10:35.792 12:41:35 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:10:35.792 12:41:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:35.792 12:41:35 -- common/autotest_common.sh@1477 -- # uname 00:10:35.792 12:41:35 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:10:35.792 12:41:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:10:35.792 12:41:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:10:35.792 lcov: LCOV version 1.15 00:10:35.792 12:41:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:50.675 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:50.676 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:05.537 12:42:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:11:05.537 12:42:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:05.537 12:42:03 -- common/autotest_common.sh@10 -- # set +x 00:11:05.537 12:42:03 -- spdk/autotest.sh@78 -- # rm -f 00:11:05.537 12:42:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:05.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.537 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:05.537 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:05.537 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:11:05.537 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:11:05.537 12:42:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:11:05.537 12:42:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:05.537 12:42:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:05.537 12:42:04 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:11:05.537 12:42:04 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:11:05.537 12:42:04 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:11:05.537 12:42:04 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:11:05.537 12:42:04 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:11:05.537 12:42:04 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:11:05.537 12:42:04 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:11:05.537 12:42:04 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:11:05.537 12:42:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:11:05.537 12:42:04 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:11:05.537 12:42:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:11:05.537 12:42:04 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:05.537 12:42:04 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:05.537 12:42:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:05.537 12:42:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:05.537 12:42:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:11:05.537 12:42:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:05.537 12:42:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:05.537 12:42:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:11:05.537 12:42:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:05.537 12:42:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:05.537 No valid GPT data, bailing 00:11:05.537 12:42:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:05.537 12:42:04 -- scripts/common.sh@394 -- # pt= 00:11:05.537 12:42:04 -- scripts/common.sh@395 -- # return 1 00:11:05.537 12:42:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:05.537 1+0 records in 00:11:05.537 1+0 records out 00:11:05.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322718 s, 32.5 MB/s 00:11:05.537 12:42:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:05.537 12:42:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:05.537 12:42:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:11:05.537 12:42:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:11:05.537 12:42:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:05.537 No valid GPT data, bailing 00:11:05.537 12:42:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:05.537 12:42:04 -- scripts/common.sh@394 -- # pt= 00:11:05.537 12:42:04 -- scripts/common.sh@395 -- # return 1 00:11:05.537 12:42:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:05.537 1+0 records in 00:11:05.537 1+0 records out 00:11:05.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593437 s, 177 MB/s 00:11:05.537 12:42:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:05.537 12:42:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:05.537 12:42:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:11:05.537 12:42:04 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:11:05.537 12:42:04 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:11:05.537 No valid GPT data, bailing 00:11:05.537 12:42:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:11:05.537 12:42:05 -- scripts/common.sh@394 -- # pt= 00:11:05.537 12:42:05 -- scripts/common.sh@395 -- # return 1 00:11:05.537 12:42:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:11:05.537 1+0 records in 00:11:05.537 1+0 records out 00:11:05.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511755 s, 205 MB/s 00:11:05.537 12:42:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:05.537 12:42:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:05.537 12:42:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:11:05.537 12:42:05 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:11:05.537 12:42:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:11:05.537 No valid GPT data, bailing 00:11:05.537 12:42:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:11:05.537 12:42:05 -- scripts/common.sh@394 -- # pt= 00:11:05.537 12:42:05 -- scripts/common.sh@395 -- # return 1 00:11:05.537 12:42:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:11:05.537 1+0 records in 00:11:05.537 1+0 records out 00:11:05.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572789 s, 183 MB/s 00:11:05.537 12:42:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:05.537 12:42:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:05.537 12:42:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:11:05.537 12:42:05 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:11:05.537 12:42:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:11:05.537 No valid GPT data, bailing 00:11:05.537 12:42:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:11:05.537 12:42:05 -- scripts/common.sh@394 -- # pt= 00:11:05.537 12:42:05 -- scripts/common.sh@395 -- # return 1 00:11:05.537 12:42:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:11:05.537 1+0 records in 00:11:05.537 1+0 records out 00:11:05.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450024 s, 233 MB/s 00:11:05.537 12:42:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:05.537 12:42:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:05.537 12:42:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:11:05.537 12:42:05 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:11:05.537 12:42:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:11:05.537 No valid GPT data, bailing 00:11:05.538 12:42:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:11:05.538 12:42:05 -- scripts/common.sh@394 -- # pt= 00:11:05.538 12:42:05 -- scripts/common.sh@395 -- # return 1 00:11:05.538 12:42:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:11:05.538 1+0 records in 00:11:05.538 1+0 records out 00:11:05.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511288 s, 205 MB/s 00:11:05.538 12:42:05 -- spdk/autotest.sh@105 -- # sync 00:11:05.538 12:42:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:05.538 12:42:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:05.538 12:42:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:07.435 
12:42:06 -- spdk/autotest.sh@111 -- # uname -s 00:11:07.435 12:42:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:11:07.435 12:42:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:11:07.435 12:42:06 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:07.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.255 Hugepages 00:11:08.255 node hugesize free / total 00:11:08.255 node0 1048576kB 0 / 0 00:11:08.255 node0 2048kB 0 / 0 00:11:08.255 00:11:08.255 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:08.255 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:08.255 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:08.255 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:08.512 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:11:08.512 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:11:08.512 12:42:08 -- spdk/autotest.sh@117 -- # uname -s 00:11:08.512 12:42:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:11:08.512 12:42:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:11:08.512 12:42:08 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:09.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:09.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.700 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.956 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:09.956 12:42:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:11:10.886 12:42:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:11:10.886 12:42:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:11:10.886 12:42:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:10.886 12:42:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:10.886 12:42:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:10.886 12:42:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:10.886 12:42:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:10.886 12:42:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:10.886 12:42:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:11.168 12:42:10 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:11.168 12:42:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:11.168 12:42:10 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:11.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.425 Waiting for block devices as requested 00:11:11.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.683 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.683 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.683 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:16.940 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:16.940 12:42:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:16.940 12:42:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
00:11:16.940 12:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:11:16.940 12:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:16.940 12:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:16.940 12:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:16.940 12:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:16.940 12:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:11:16.940 12:42:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:11:16.940 12:42:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:11:16.940 12:42:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:11:16.940 12:42:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:16.940 12:42:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:16.940 12:42:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:16.940 12:42:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:16.940 12:42:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:16.940 12:42:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:16.940 12:42:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:11:16.940 12:42:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:16.940 12:42:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:16.940 12:42:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:16.940 12:42:16 -- common/autotest_common.sh@1543 -- # continue 00:11:16.940 12:42:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:16.940 12:42:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:16.940 12:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:16.940 12:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:11:16.940 12:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:16.940 12:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:16.940 12:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:16.940 12:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:11:16.940 12:42:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:11:16.940 12:42:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:16.941 12:42:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:16.941 12:42:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:11:16.941 12:42:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1543 -- # continue 00:11:16.941 12:42:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:16.941 12:42:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:11:16.941 12:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:11:16.941 12:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:16.941 12:42:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:16.941 12:42:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:16.941 12:42:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1543 -- # continue 00:11:16.941 12:42:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:16.941 12:42:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:11:16.941 12:42:16 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:11:16.941 12:42:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:16.941 12:42:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:16.941 12:42:16 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:16.941 12:42:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:16.941 12:42:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:16.941 12:42:16 -- common/autotest_common.sh@1543 -- # continue 00:11:16.941 12:42:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:11:16.941 12:42:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.941 12:42:16 -- common/autotest_common.sh@10 -- # set +x 00:11:16.941 12:42:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:11:16.941 12:42:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.941 12:42:16 -- common/autotest_common.sh@10 -- # set +x 00:11:16.941 12:42:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:17.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.812 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.812 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.812 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.812 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.812 12:42:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:11:17.812 12:42:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.812 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:11:17.812 12:42:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:11:17.812 12:42:17 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:11:17.812 12:42:17 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:11:17.812 12:42:17 -- common/autotest_common.sh@1563 -- # bdfs=() 00:11:17.812 12:42:17 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:11:17.812 12:42:17 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:11:17.812 12:42:17 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:11:17.812 12:42:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:11:17.812 12:42:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:17.812 12:42:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:17.812 12:42:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:17.812 12:42:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:17.812 12:42:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:18.071 12:42:17 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:18.071 12:42:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:18.071 12:42:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:18.071 12:42:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:18.071 12:42:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:18.071 
12:42:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:18.071 12:42:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:18.071 12:42:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:18.071 12:42:17 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:11:18.071 12:42:17 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:18.071 12:42:17 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:18.071 12:42:17 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:11:18.071 12:42:17 -- common/autotest_common.sh@1572 -- # return 0 00:11:18.071 12:42:17 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:11:18.071 12:42:17 -- common/autotest_common.sh@1580 -- # return 0 00:11:18.071 12:42:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:11:18.071 12:42:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:11:18.071 12:42:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:11:18.071 12:42:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:11:18.071 12:42:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:11:18.071 12:42:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.071 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:11:18.071 12:42:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:11:18.071 12:42:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:18.071 12:42:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.071 12:42:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.071 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:11:18.071 ************************************ 00:11:18.071 START TEST env 00:11:18.071 ************************************ 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:18.071 * Looking for test storage... 
00:11:18.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.071 12:42:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.071 12:42:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.071 12:42:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.071 12:42:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.071 12:42:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.071 12:42:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.071 12:42:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.071 12:42:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.071 12:42:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.071 12:42:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.071 12:42:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.071 12:42:17 env -- scripts/common.sh@344 -- # case "$op" in 00:11:18.071 12:42:17 env -- scripts/common.sh@345 -- # : 1 00:11:18.071 12:42:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.071 12:42:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.071 12:42:17 env -- scripts/common.sh@365 -- # decimal 1 00:11:18.071 12:42:17 env -- scripts/common.sh@353 -- # local d=1 00:11:18.071 12:42:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.071 12:42:17 env -- scripts/common.sh@355 -- # echo 1 00:11:18.071 12:42:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.071 12:42:17 env -- scripts/common.sh@366 -- # decimal 2 00:11:18.071 12:42:17 env -- scripts/common.sh@353 -- # local d=2 00:11:18.071 12:42:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.071 12:42:17 env -- scripts/common.sh@355 -- # echo 2 00:11:18.071 12:42:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.071 12:42:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.071 12:42:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.071 12:42:17 env -- scripts/common.sh@368 -- # return 0 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.071 --rc genhtml_branch_coverage=1 00:11:18.071 --rc genhtml_function_coverage=1 00:11:18.071 --rc genhtml_legend=1 00:11:18.071 --rc geninfo_all_blocks=1 00:11:18.071 --rc geninfo_unexecuted_blocks=1 00:11:18.071 00:11:18.071 ' 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.071 --rc genhtml_branch_coverage=1 00:11:18.071 --rc genhtml_function_coverage=1 00:11:18.071 --rc genhtml_legend=1 00:11:18.071 --rc geninfo_all_blocks=1 00:11:18.071 --rc geninfo_unexecuted_blocks=1 00:11:18.071 00:11:18.071 ' 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.071 --rc genhtml_branch_coverage=1 00:11:18.071 --rc genhtml_function_coverage=1 00:11:18.071 --rc 
genhtml_legend=1 00:11:18.071 --rc geninfo_all_blocks=1 00:11:18.071 --rc geninfo_unexecuted_blocks=1 00:11:18.071 00:11:18.071 ' 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.071 --rc genhtml_branch_coverage=1 00:11:18.071 --rc genhtml_function_coverage=1 00:11:18.071 --rc genhtml_legend=1 00:11:18.071 --rc geninfo_all_blocks=1 00:11:18.071 --rc geninfo_unexecuted_blocks=1 00:11:18.071 00:11:18.071 ' 00:11:18.071 12:42:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.071 12:42:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.071 12:42:17 env -- common/autotest_common.sh@10 -- # set +x 00:11:18.071 ************************************ 00:11:18.071 START TEST env_memory 00:11:18.071 ************************************ 00:11:18.071 12:42:17 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:18.071 00:11:18.071 00:11:18.071 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.071 http://cunit.sourceforge.net/ 00:11:18.071 00:11:18.071 00:11:18.071 Suite: memory 00:11:18.328 Test: alloc and free memory map ...[2024-12-05 12:42:17.945827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:18.328 passed 00:11:18.328 Test: mem map translation ...[2024-12-05 12:42:17.985492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:18.328 [2024-12-05 12:42:17.985725] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:18.328 [2024-12-05 12:42:17.985946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:18.328 [2024-12-05 12:42:17.986065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:18.328 passed 00:11:18.328 Test: mem map registration ...[2024-12-05 12:42:18.055178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:11:18.328 [2024-12-05 12:42:18.055341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:11:18.328 passed 00:11:18.328 Test: mem map adjacent registrations ...passed 00:11:18.328 00:11:18.328 Run Summary: Type Total Ran Passed Failed Inactive 00:11:18.328 suites 1 1 n/a 0 0 00:11:18.328 tests 4 4 4 0 0 00:11:18.328 asserts 152 152 152 0 n/a 00:11:18.328 00:11:18.328 Elapsed time = 0.236 seconds 00:11:18.328 00:11:18.328 real 0m0.270s 00:11:18.328 user 0m0.249s 00:11:18.328 sys 0m0.013s 00:11:18.328 12:42:18 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.328 12:42:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:18.328 ************************************ 00:11:18.328 END TEST env_memory 00:11:18.328 ************************************ 00:11:18.584 12:42:18 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:18.584 12:42:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.584 12:42:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.584 12:42:18 env -- common/autotest_common.sh@10 -- # set +x 00:11:18.584 ************************************ 00:11:18.584 START TEST env_vtophys 00:11:18.584 ************************************ 00:11:18.584 12:42:18 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:18.584 EAL: lib.eal log level changed from notice to debug 00:11:18.584 EAL: Detected lcore 0 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 1 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 2 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 3 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 4 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 5 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 6 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 7 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 8 as core 0 on socket 0 00:11:18.584 EAL: Detected lcore 9 as core 0 on socket 0 00:11:18.584 EAL: Maximum logical cores by configuration: 128 00:11:18.584 EAL: Detected CPU lcores: 10 00:11:18.584 EAL: Detected NUMA nodes: 1 00:11:18.584 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:11:18.584 EAL: Detected shared linkage of DPDK 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:11:18.584 EAL: Registered [vdev] bus. 00:11:18.584 EAL: bus.vdev log level changed from disabled to notice 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:11:18.584 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:11:18.584 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:11:18.584 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:11:18.584 EAL: No shared files mode enabled, IPC will be disabled 00:11:18.584 EAL: No shared files mode enabled, IPC is disabled 00:11:18.584 EAL: Selected IOVA mode 'PA' 00:11:18.584 EAL: Probing VFIO support... 00:11:18.584 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:18.584 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:18.584 EAL: Ask a virtual area of 0x2e000 bytes 00:11:18.584 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:18.584 EAL: Setting up physically contiguous memory... 
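The EAL probe above finds neither /sys/module/vfio nor vfio_pci, skips VFIO support, and selects IOVA mode 'PA', which lines up with the earlier setup.sh output binding the controllers to uio_pci_generic. A quick way to reproduce that check before a run (a sketch, not part of the test itself):

if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
  echo "vfio present; EAL may be able to select IOVA mode VA"
else
  echo "no vfio; expect uio_pci_generic binding and IOVA mode PA"
fi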
00:11:18.584 EAL: Setting maximum number of open files to 524288 00:11:18.584 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:18.584 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:18.584 EAL: Ask a virtual area of 0x61000 bytes 00:11:18.584 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:18.584 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:18.584 EAL: Ask a virtual area of 0x400000000 bytes 00:11:18.584 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:18.584 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:18.584 EAL: Ask a virtual area of 0x61000 bytes 00:11:18.585 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:18.585 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:18.585 EAL: Ask a virtual area of 0x400000000 bytes 00:11:18.585 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:18.585 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:18.585 EAL: Ask a virtual area of 0x61000 bytes 00:11:18.585 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:18.585 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:18.585 EAL: Ask a virtual area of 0x400000000 bytes 00:11:18.585 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:18.585 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:18.585 EAL: Ask a virtual area of 0x61000 bytes 00:11:18.585 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:18.585 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:18.585 EAL: Ask a virtual area of 0x400000000 bytes 00:11:18.585 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:18.585 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:18.585 EAL: Hugepages will be freed exactly as allocated. 00:11:18.585 EAL: No shared files mode enabled, IPC is disabled 00:11:18.585 EAL: No shared files mode enabled, IPC is disabled 00:11:18.585 EAL: TSC frequency is ~2600000 KHz 00:11:18.585 EAL: Main lcore 0 is ready (tid=7fca007f9a40;cpuset=[0]) 00:11:18.585 EAL: Trying to obtain current memory policy. 00:11:18.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:18.585 EAL: Restoring previous memory policy: 0 00:11:18.585 EAL: request: mp_malloc_sync 00:11:18.585 EAL: No shared files mode enabled, IPC is disabled 00:11:18.585 EAL: Heap on socket 0 was expanded by 2MB 00:11:18.585 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:18.585 EAL: No shared files mode enabled, IPC is disabled 00:11:18.585 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:18.585 EAL: Mem event callback 'spdk:(nil)' registered 00:11:18.585 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:18.585 00:11:18.585 00:11:18.585 CUnit - A unit testing framework for C - Version 2.1-3 00:11:18.585 http://cunit.sourceforge.net/ 00:11:18.585 00:11:18.585 00:11:18.585 Suite: components_suite 00:11:19.148 Test: vtophys_malloc_test ...passed 00:11:19.148 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
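The four VA reservations above are sized directly from the memseg-list geometry: n_segs:8192 pages of hugepage_sz:2097152 bytes gives 8192 * 2 MiB = 16 GiB per list, which is exactly the size = 0x400000000 in each reservation line. A one-liner to verify the arithmetic:

printf '0x%x\n' $((8192 * 2097152))   # prints 0x400000000, the per-list VA size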
00:11:19.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.148 EAL: Restoring previous memory policy: 4 00:11:19.148 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.148 EAL: request: mp_malloc_sync 00:11:19.148 EAL: No shared files mode enabled, IPC is disabled 00:11:19.148 EAL: Heap on socket 0 was expanded by 4MB 00:11:19.148 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.148 EAL: request: mp_malloc_sync 00:11:19.148 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 4MB 00:11:19.149 EAL: Trying to obtain current memory policy. 00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.149 EAL: Restoring previous memory policy: 4 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was expanded by 6MB 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 6MB 00:11:19.149 EAL: Trying to obtain current memory policy. 00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.149 EAL: Restoring previous memory policy: 4 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was expanded by 10MB 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 10MB 00:11:19.149 EAL: Trying to obtain current memory policy. 00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.149 EAL: Restoring previous memory policy: 4 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was expanded by 18MB 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 18MB 00:11:19.149 EAL: Trying to obtain current memory policy. 00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.149 EAL: Restoring previous memory policy: 4 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was expanded by 34MB 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 34MB 00:11:19.149 EAL: Trying to obtain current memory policy. 
00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.149 EAL: Restoring previous memory policy: 4 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was expanded by 66MB 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 66MB 00:11:19.149 EAL: Trying to obtain current memory policy. 00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.149 EAL: Restoring previous memory policy: 4 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was expanded by 130MB 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 130MB 00:11:19.149 EAL: Trying to obtain current memory policy. 00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.149 EAL: Restoring previous memory policy: 4 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was expanded by 258MB 00:11:19.149 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.149 EAL: request: mp_malloc_sync 00:11:19.149 EAL: No shared files mode enabled, IPC is disabled 00:11:19.149 EAL: Heap on socket 0 was shrunk by 258MB 00:11:19.149 EAL: Trying to obtain current memory policy. 00:11:19.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.405 EAL: Restoring previous memory policy: 4 00:11:19.405 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.405 EAL: request: mp_malloc_sync 00:11:19.405 EAL: No shared files mode enabled, IPC is disabled 00:11:19.405 EAL: Heap on socket 0 was expanded by 514MB 00:11:19.405 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.405 EAL: request: mp_malloc_sync 00:11:19.405 EAL: No shared files mode enabled, IPC is disabled 00:11:19.405 EAL: Heap on socket 0 was shrunk by 514MB 00:11:19.405 EAL: Trying to obtain current memory policy. 
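The expand/shrink rounds in vtophys_spdk_malloc_test follow a clean doubling series: 4, 6, 10, 18, 34, 66, 130, 258, 514 MB so far, with 1026 MB below, i.e. 2^k + 2 MB for k = 1..10. That is consistent with each round allocating twice the previous size plus roughly one 2 MB hugepage of allocator overhead (a reading of this log, not a claim about SPDK internals). The series can be regenerated with:

for k in $(seq 1 10); do printf '%dMB ' $((2**k + 2)); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB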
00:11:19.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:19.662 EAL: Restoring previous memory policy: 4 00:11:19.663 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.663 EAL: request: mp_malloc_sync 00:11:19.663 EAL: No shared files mode enabled, IPC is disabled 00:11:19.663 EAL: Heap on socket 0 was expanded by 1026MB 00:11:19.920 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.920 passed 00:11:19.920 00:11:19.920 Run Summary: Type Total Ran Passed Failed Inactive 00:11:19.920 suites 1 1 n/a 0 0 00:11:19.920 tests 2 2 2 0 0 00:11:19.920 asserts 5568 5568 5568 0 n/a 00:11:19.920 00:11:19.920 Elapsed time = 1.281 seconds 00:11:19.920 EAL: request: mp_malloc_sync 00:11:19.920 EAL: No shared files mode enabled, IPC is disabled 00:11:19.920 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:19.920 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.920 EAL: request: mp_malloc_sync 00:11:19.920 EAL: No shared files mode enabled, IPC is disabled 00:11:19.920 EAL: Heap on socket 0 was shrunk by 2MB 00:11:19.920 EAL: No shared files mode enabled, IPC is disabled 00:11:19.920 EAL: No shared files mode enabled, IPC is disabled 00:11:19.920 EAL: No shared files mode enabled, IPC is disabled 00:11:19.920 00:11:19.920 real 0m1.521s 00:11:19.920 user 0m0.629s 00:11:19.920 sys 0m0.752s 00:11:19.920 12:42:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.920 ************************************ 00:11:19.920 12:42:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:19.920 END TEST env_vtophys 00:11:19.920 ************************************ 00:11:19.920 12:42:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:19.920 12:42:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.920 12:42:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.920 12:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:11:19.920 ************************************ 00:11:19.920 START TEST env_pci 00:11:19.920 ************************************ 00:11:19.920 12:42:19 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:20.177 00:11:20.177 00:11:20.177 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.177 http://cunit.sourceforge.net/ 00:11:20.177 00:11:20.177 00:11:20.177 Suite: pci 00:11:20.177 Test: pci_hook ...[2024-12-05 12:42:19.790597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69460 has claimed it 00:11:20.177 EAL: Cannot find device (10000:00:01.0) 00:11:20.177 passed 00:11:20.177 00:11:20.177 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.177 suites 1 1 n/a 0 0 00:11:20.177 tests 1 1 1 0 0 00:11:20.177 asserts 25 25 25 0 n/a 00:11:20.177 00:11:20.177 Elapsed time = 0.004 seconds 00:11:20.177 EAL: Failed to attach device on primary process 00:11:20.177 ************************************ 00:11:20.177 END TEST env_pci 00:11:20.177 ************************************ 00:11:20.177 00:11:20.177 real 0m0.048s 00:11:20.177 user 0m0.025s 00:11:20.177 sys 0m0.022s 00:11:20.177 12:42:19 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.177 12:42:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:20.177 12:42:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:20.177 12:42:19 env -- env/env.sh@15 -- # uname 00:11:20.177 12:42:19 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:20.177 12:42:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:20.177 12:42:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:20.177 12:42:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:20.177 12:42:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.177 12:42:19 env -- common/autotest_common.sh@10 -- # set +x 00:11:20.177 ************************************ 00:11:20.177 START TEST env_dpdk_post_init 00:11:20.177 ************************************ 00:11:20.177 12:42:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:20.177 EAL: Detected CPU lcores: 10 00:11:20.177 EAL: Detected NUMA nodes: 1 00:11:20.177 EAL: Detected shared linkage of DPDK 00:11:20.177 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:20.177 EAL: Selected IOVA mode 'PA' 00:11:20.177 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:20.434 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:20.434 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:20.434 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:11:20.434 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:11:20.434 Starting DPDK initialization... 00:11:20.434 Starting SPDK post initialization... 00:11:20.434 SPDK NVMe probe 00:11:20.434 Attaching to 0000:00:10.0 00:11:20.434 Attaching to 0000:00:11.0 00:11:20.434 Attaching to 0000:00:12.0 00:11:20.434 Attaching to 0000:00:13.0 00:11:20.434 Attached to 0000:00:11.0 00:11:20.434 Attached to 0000:00:13.0 00:11:20.434 Attached to 0000:00:10.0 00:11:20.434 Attached to 0000:00:12.0 00:11:20.434 Cleaning up... 
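Probe order above is ascending by BDF (10.0 through 13.0) while the "Attached to" lines complete as 11.0, 13.0, 10.0, 12.0; controller attach finishes asynchronously, so completion order need not match probe order. After a run like this, the resulting driver binding can be inspected with setup.sh's status subcommand (assuming the same checkout path as this run):

sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status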
00:11:20.434 00:11:20.434 real 0m0.230s 00:11:20.434 user 0m0.067s 00:11:20.434 sys 0m0.065s 00:11:20.434 12:42:20 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.434 ************************************ 00:11:20.434 END TEST env_dpdk_post_init 00:11:20.434 ************************************ 00:11:20.434 12:42:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:20.434 12:42:20 env -- env/env.sh@26 -- # uname 00:11:20.434 12:42:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:20.434 12:42:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:20.434 12:42:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.434 12:42:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.434 12:42:20 env -- common/autotest_common.sh@10 -- # set +x 00:11:20.434 ************************************ 00:11:20.434 START TEST env_mem_callbacks 00:11:20.434 ************************************ 00:11:20.434 12:42:20 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:20.434 EAL: Detected CPU lcores: 10 00:11:20.434 EAL: Detected NUMA nodes: 1 00:11:20.434 EAL: Detected shared linkage of DPDK 00:11:20.434 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:20.434 EAL: Selected IOVA mode 'PA' 00:11:20.692 00:11:20.692 00:11:20.692 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.692 http://cunit.sourceforge.net/ 00:11:20.692 00:11:20.692 00:11:20.692 Suite: memory 00:11:20.692 Test: test ... 00:11:20.692 register 0x200000200000 2097152 00:11:20.692 malloc 3145728 00:11:20.692 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:20.692 register 0x200000400000 4194304 00:11:20.692 buf 0x200000500000 len 3145728 PASSED 00:11:20.692 malloc 64 00:11:20.692 buf 0x2000004fff40 len 64 PASSED 00:11:20.692 malloc 4194304 00:11:20.692 register 0x200000800000 6291456 00:11:20.692 buf 0x200000a00000 len 4194304 PASSED 00:11:20.692 free 0x200000500000 3145728 00:11:20.692 free 0x2000004fff40 64 00:11:20.692 unregister 0x200000400000 4194304 PASSED 00:11:20.692 free 0x200000a00000 4194304 00:11:20.692 unregister 0x200000800000 6291456 PASSED 00:11:20.692 malloc 8388608 00:11:20.692 register 0x200000400000 10485760 00:11:20.692 buf 0x200000600000 len 8388608 PASSED 00:11:20.692 free 0x200000600000 8388608 00:11:20.692 unregister 0x200000400000 10485760 PASSED 00:11:20.692 passed 00:11:20.692 00:11:20.692 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.692 suites 1 1 n/a 0 0 00:11:20.692 tests 1 1 1 0 0 00:11:20.692 asserts 15 15 15 0 n/a 00:11:20.692 00:11:20.692 Elapsed time = 0.009 seconds 00:11:20.692 00:11:20.692 real 0m0.175s 00:11:20.692 user 0m0.031s 00:11:20.692 sys 0m0.041s 00:11:20.692 12:42:20 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.692 12:42:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:20.692 ************************************ 00:11:20.692 END TEST env_mem_callbacks 00:11:20.692 ************************************ 00:11:20.692 ************************************ 00:11:20.692 END TEST env 00:11:20.692 ************************************ 00:11:20.692 00:11:20.692 real 0m2.613s 00:11:20.692 user 0m1.149s 00:11:20.692 sys 0m1.108s 00:11:20.692 12:42:20 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.692 12:42:20 env -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.692 12:42:20 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:20.692 12:42:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.692 12:42:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.692 12:42:20 -- common/autotest_common.sh@10 -- # set +x 00:11:20.692 ************************************ 00:11:20.692 START TEST rpc 00:11:20.692 ************************************ 00:11:20.692 12:42:20 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:20.692 * Looking for test storage... 00:11:20.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.693 12:42:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.693 12:42:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.693 12:42:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.693 12:42:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.693 12:42:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.693 12:42:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.693 12:42:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.693 12:42:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:20.693 12:42:20 rpc -- scripts/common.sh@345 -- # : 1 00:11:20.693 12:42:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.693 12:42:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.693 12:42:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:11:20.693 12:42:20 rpc -- scripts/common.sh@353 -- # local d=1 00:11:20.693 12:42:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.693 12:42:20 rpc -- scripts/common.sh@355 -- # echo 1 00:11:20.693 12:42:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.693 12:42:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@353 -- # local d=2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.693 12:42:20 rpc -- scripts/common.sh@355 -- # echo 2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.693 12:42:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.693 12:42:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.693 12:42:20 rpc -- scripts/common.sh@368 -- # return 0 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.693 --rc genhtml_branch_coverage=1 00:11:20.693 --rc genhtml_function_coverage=1 00:11:20.693 --rc genhtml_legend=1 00:11:20.693 --rc geninfo_all_blocks=1 00:11:20.693 --rc geninfo_unexecuted_blocks=1 00:11:20.693 00:11:20.693 ' 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.693 --rc genhtml_branch_coverage=1 00:11:20.693 --rc genhtml_function_coverage=1 00:11:20.693 --rc genhtml_legend=1 00:11:20.693 --rc geninfo_all_blocks=1 00:11:20.693 --rc geninfo_unexecuted_blocks=1 00:11:20.693 00:11:20.693 ' 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.693 --rc genhtml_branch_coverage=1 00:11:20.693 --rc genhtml_function_coverage=1 00:11:20.693 --rc genhtml_legend=1 00:11:20.693 --rc geninfo_all_blocks=1 00:11:20.693 --rc geninfo_unexecuted_blocks=1 00:11:20.693 00:11:20.693 ' 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.693 --rc genhtml_branch_coverage=1 00:11:20.693 --rc genhtml_function_coverage=1 00:11:20.693 --rc genhtml_legend=1 00:11:20.693 --rc geninfo_all_blocks=1 00:11:20.693 --rc geninfo_unexecuted_blocks=1 00:11:20.693 00:11:20.693 ' 00:11:20.693 12:42:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69582 00:11:20.693 12:42:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:20.693 12:42:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69582 00:11:20.693 12:42:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@835 -- # '[' -z 69582 ']' 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.693 12:42:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.952 [2024-12-05 12:42:20.621136] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:20.952 [2024-12-05 12:42:20.621272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69582 ] 00:11:20.952 [2024-12-05 12:42:20.779722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.211 [2024-12-05 12:42:20.804855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:21.211 [2024-12-05 12:42:20.804917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69582' to capture a snapshot of events at runtime. 00:11:21.211 [2024-12-05 12:42:20.804930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.211 [2024-12-05 12:42:20.804938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.211 [2024-12-05 12:42:20.804950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69582 for offline analysis/debug. 00:11:21.211 [2024-12-05 12:42:20.805313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.776 12:42:21 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.776 12:42:21 rpc -- common/autotest_common.sh@868 -- # return 0 00:11:21.776 12:42:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:21.776 12:42:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:21.776 12:42:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:21.776 12:42:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:21.776 12:42:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.776 12:42:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.776 12:42:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.776 ************************************ 00:11:21.776 START TEST rpc_integrity 00:11:21.776 ************************************ 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.776 12:42:21 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:21.776 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:21.776 { 00:11:21.776 "name": "Malloc0", 00:11:21.776 "aliases": [ 00:11:21.776 "25d2ac0c-76ad-4a76-92e9-7b975e871c04" 00:11:21.776 ], 00:11:21.776 "product_name": "Malloc disk", 00:11:21.776 "block_size": 512, 00:11:21.776 "num_blocks": 16384, 00:11:21.776 "uuid": "25d2ac0c-76ad-4a76-92e9-7b975e871c04", 00:11:21.776 "assigned_rate_limits": { 00:11:21.776 "rw_ios_per_sec": 0, 00:11:21.776 "rw_mbytes_per_sec": 0, 00:11:21.776 "r_mbytes_per_sec": 0, 00:11:21.776 "w_mbytes_per_sec": 0 00:11:21.776 }, 00:11:21.776 "claimed": false, 00:11:21.776 "zoned": false, 00:11:21.776 "supported_io_types": { 00:11:21.776 "read": true, 00:11:21.776 "write": true, 00:11:21.776 "unmap": true, 00:11:21.776 "flush": true, 00:11:21.776 "reset": true, 00:11:21.776 "nvme_admin": false, 00:11:21.776 "nvme_io": false, 00:11:21.776 "nvme_io_md": false, 00:11:21.776 "write_zeroes": true, 00:11:21.776 "zcopy": true, 00:11:21.776 "get_zone_info": false, 00:11:21.776 "zone_management": false, 00:11:21.776 "zone_append": false, 00:11:21.776 "compare": false, 00:11:21.776 "compare_and_write": false, 00:11:21.776 "abort": true, 00:11:21.776 "seek_hole": false, 00:11:21.776 "seek_data": false, 00:11:21.776 "copy": true, 00:11:21.776 "nvme_iov_md": false 00:11:21.776 }, 00:11:21.776 "memory_domains": [ 00:11:21.776 { 00:11:21.776 "dma_device_id": "system", 00:11:21.776 "dma_device_type": 1 00:11:21.776 }, 00:11:21.776 { 00:11:21.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.776 "dma_device_type": 2 00:11:21.776 } 00:11:21.776 ], 00:11:21.776 "driver_specific": {} 00:11:21.776 } 00:11:21.776 ]' 00:11:21.776 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:22.034 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:22.034 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:22.034 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.034 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.034 [2024-12-05 12:42:21.649605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:22.034 [2024-12-05 12:42:21.649686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.034 [2024-12-05 12:42:21.649723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:22.034 [2024-12-05 12:42:21.649734] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.034 [2024-12-05 12:42:21.652780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.034 [2024-12-05 12:42:21.652969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:22.034 Passthru0 00:11:22.034 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.034 
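rpc_integrity above starts from an empty bdev list, creates an 8 MB malloc bdev with 512-byte blocks (Malloc0), then layers Passthru0 on top of it; the bdev_get_bdevs dump that follows should report both bdevs, with Malloc0 now claimed ("claimed": true, "claim_type": "exclusive_write"). The same sequence against a running spdk_tgt, sketched with rpc.py (which talks to /var/tmp/spdk.sock by default):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512 B blocks -> Malloc0 here
$rpc bdev_passthru_create -b Malloc0 -p Passthru0  # passthru layered on the malloc bdev
$rpc bdev_get_bdevs | jq length                    # expect 2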
12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:22.034 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.034 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.034 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.034 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:22.034 { 00:11:22.034 "name": "Malloc0", 00:11:22.034 "aliases": [ 00:11:22.035 "25d2ac0c-76ad-4a76-92e9-7b975e871c04" 00:11:22.035 ], 00:11:22.035 "product_name": "Malloc disk", 00:11:22.035 "block_size": 512, 00:11:22.035 "num_blocks": 16384, 00:11:22.035 "uuid": "25d2ac0c-76ad-4a76-92e9-7b975e871c04", 00:11:22.035 "assigned_rate_limits": { 00:11:22.035 "rw_ios_per_sec": 0, 00:11:22.035 "rw_mbytes_per_sec": 0, 00:11:22.035 "r_mbytes_per_sec": 0, 00:11:22.035 "w_mbytes_per_sec": 0 00:11:22.035 }, 00:11:22.035 "claimed": true, 00:11:22.035 "claim_type": "exclusive_write", 00:11:22.035 "zoned": false, 00:11:22.035 "supported_io_types": { 00:11:22.035 "read": true, 00:11:22.035 "write": true, 00:11:22.035 "unmap": true, 00:11:22.035 "flush": true, 00:11:22.035 "reset": true, 00:11:22.035 "nvme_admin": false, 00:11:22.035 "nvme_io": false, 00:11:22.035 "nvme_io_md": false, 00:11:22.035 "write_zeroes": true, 00:11:22.035 "zcopy": true, 00:11:22.035 "get_zone_info": false, 00:11:22.035 "zone_management": false, 00:11:22.035 "zone_append": false, 00:11:22.035 "compare": false, 00:11:22.035 "compare_and_write": false, 00:11:22.035 "abort": true, 00:11:22.035 "seek_hole": false, 00:11:22.035 "seek_data": false, 00:11:22.035 "copy": true, 00:11:22.035 "nvme_iov_md": false 00:11:22.035 }, 00:11:22.035 "memory_domains": [ 00:11:22.035 { 00:11:22.035 "dma_device_id": "system", 00:11:22.035 "dma_device_type": 1 00:11:22.035 }, 00:11:22.035 { 00:11:22.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.035 "dma_device_type": 2 00:11:22.035 } 00:11:22.035 ], 00:11:22.035 "driver_specific": {} 00:11:22.035 }, 00:11:22.035 { 00:11:22.035 "name": "Passthru0", 00:11:22.035 "aliases": [ 00:11:22.035 "62ac3a73-433e-520b-a042-616ea2275b41" 00:11:22.035 ], 00:11:22.035 "product_name": "passthru", 00:11:22.035 "block_size": 512, 00:11:22.035 "num_blocks": 16384, 00:11:22.035 "uuid": "62ac3a73-433e-520b-a042-616ea2275b41", 00:11:22.035 "assigned_rate_limits": { 00:11:22.035 "rw_ios_per_sec": 0, 00:11:22.035 "rw_mbytes_per_sec": 0, 00:11:22.035 "r_mbytes_per_sec": 0, 00:11:22.035 "w_mbytes_per_sec": 0 00:11:22.035 }, 00:11:22.035 "claimed": false, 00:11:22.035 "zoned": false, 00:11:22.035 "supported_io_types": { 00:11:22.035 "read": true, 00:11:22.035 "write": true, 00:11:22.035 "unmap": true, 00:11:22.035 "flush": true, 00:11:22.035 "reset": true, 00:11:22.035 "nvme_admin": false, 00:11:22.035 "nvme_io": false, 00:11:22.035 "nvme_io_md": false, 00:11:22.035 "write_zeroes": true, 00:11:22.035 "zcopy": true, 00:11:22.035 "get_zone_info": false, 00:11:22.035 "zone_management": false, 00:11:22.035 "zone_append": false, 00:11:22.035 "compare": false, 00:11:22.035 "compare_and_write": false, 00:11:22.035 "abort": true, 00:11:22.035 "seek_hole": false, 00:11:22.035 "seek_data": false, 00:11:22.035 "copy": true, 00:11:22.035 "nvme_iov_md": false 00:11:22.035 }, 00:11:22.035 "memory_domains": [ 00:11:22.035 { 00:11:22.035 "dma_device_id": "system", 00:11:22.035 "dma_device_type": 1 00:11:22.035 }, 00:11:22.035 { 00:11:22.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.035 "dma_device_type": 2 
00:11:22.035 } 00:11:22.035 ], 00:11:22.035 "driver_specific": { 00:11:22.035 "passthru": { 00:11:22.035 "name": "Passthru0", 00:11:22.035 "base_bdev_name": "Malloc0" 00:11:22.035 } 00:11:22.035 } 00:11:22.035 } 00:11:22.035 ]' 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:22.035 ************************************ 00:11:22.035 END TEST rpc_integrity 00:11:22.035 ************************************ 00:11:22.035 12:42:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:22.035 00:11:22.035 real 0m0.224s 00:11:22.035 user 0m0.127s 00:11:22.035 sys 0m0.030s 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.035 12:42:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 12:42:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:22.035 12:42:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.035 12:42:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.035 12:42:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 ************************************ 00:11:22.035 START TEST rpc_plugins 00:11:22.035 ************************************ 00:11:22.035 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:11:22.035 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:22.035 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.035 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.035 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:22.035 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:22.035 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.035 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:22.035 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.035 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:22.035 { 00:11:22.035 "name": "Malloc1", 00:11:22.035 "aliases": 
[ 00:11:22.035 "da6e9046-3b30-4b8a-8e1b-cb9b91c6633e" 00:11:22.035 ], 00:11:22.035 "product_name": "Malloc disk", 00:11:22.035 "block_size": 4096, 00:11:22.035 "num_blocks": 256, 00:11:22.035 "uuid": "da6e9046-3b30-4b8a-8e1b-cb9b91c6633e", 00:11:22.035 "assigned_rate_limits": { 00:11:22.035 "rw_ios_per_sec": 0, 00:11:22.035 "rw_mbytes_per_sec": 0, 00:11:22.035 "r_mbytes_per_sec": 0, 00:11:22.035 "w_mbytes_per_sec": 0 00:11:22.035 }, 00:11:22.035 "claimed": false, 00:11:22.035 "zoned": false, 00:11:22.035 "supported_io_types": { 00:11:22.035 "read": true, 00:11:22.035 "write": true, 00:11:22.035 "unmap": true, 00:11:22.035 "flush": true, 00:11:22.035 "reset": true, 00:11:22.035 "nvme_admin": false, 00:11:22.035 "nvme_io": false, 00:11:22.035 "nvme_io_md": false, 00:11:22.035 "write_zeroes": true, 00:11:22.035 "zcopy": true, 00:11:22.035 "get_zone_info": false, 00:11:22.035 "zone_management": false, 00:11:22.035 "zone_append": false, 00:11:22.035 "compare": false, 00:11:22.035 "compare_and_write": false, 00:11:22.035 "abort": true, 00:11:22.035 "seek_hole": false, 00:11:22.035 "seek_data": false, 00:11:22.035 "copy": true, 00:11:22.035 "nvme_iov_md": false 00:11:22.035 }, 00:11:22.035 "memory_domains": [ 00:11:22.035 { 00:11:22.036 "dma_device_id": "system", 00:11:22.036 "dma_device_type": 1 00:11:22.036 }, 00:11:22.036 { 00:11:22.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.036 "dma_device_type": 2 00:11:22.036 } 00:11:22.036 ], 00:11:22.036 "driver_specific": {} 00:11:22.036 } 00:11:22.036 ]' 00:11:22.036 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:22.036 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:22.036 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:22.036 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.036 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:22.036 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.036 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:22.036 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.036 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:22.293 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.293 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:22.293 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:22.294 ************************************ 00:11:22.294 END TEST rpc_plugins 00:11:22.294 ************************************ 00:11:22.294 12:42:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:22.294 00:11:22.294 real 0m0.125s 00:11:22.294 user 0m0.074s 00:11:22.294 sys 0m0.017s 00:11:22.294 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.294 12:42:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:22.294 12:42:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:22.294 12:42:21 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.294 12:42:21 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.294 12:42:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.294 ************************************ 00:11:22.294 START TEST rpc_trace_cmd_test 00:11:22.294 ************************************ 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:22.294 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69582", 00:11:22.294 "tpoint_group_mask": "0x8", 00:11:22.294 "iscsi_conn": { 00:11:22.294 "mask": "0x2", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "scsi": { 00:11:22.294 "mask": "0x4", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "bdev": { 00:11:22.294 "mask": "0x8", 00:11:22.294 "tpoint_mask": "0xffffffffffffffff" 00:11:22.294 }, 00:11:22.294 "nvmf_rdma": { 00:11:22.294 "mask": "0x10", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "nvmf_tcp": { 00:11:22.294 "mask": "0x20", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "ftl": { 00:11:22.294 "mask": "0x40", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "blobfs": { 00:11:22.294 "mask": "0x80", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "dsa": { 00:11:22.294 "mask": "0x200", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "thread": { 00:11:22.294 "mask": "0x400", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "nvme_pcie": { 00:11:22.294 "mask": "0x800", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "iaa": { 00:11:22.294 "mask": "0x1000", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "nvme_tcp": { 00:11:22.294 "mask": "0x2000", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "bdev_nvme": { 00:11:22.294 "mask": "0x4000", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "sock": { 00:11:22.294 "mask": "0x8000", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "blob": { 00:11:22.294 "mask": "0x10000", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "bdev_raid": { 00:11:22.294 "mask": "0x20000", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 }, 00:11:22.294 "scheduler": { 00:11:22.294 "mask": "0x40000", 00:11:22.294 "tpoint_mask": "0x0" 00:11:22.294 } 00:11:22.294 }' 00:11:22.294 12:42:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:22.294 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:22.553 ************************************ 00:11:22.553 END TEST rpc_trace_cmd_test 00:11:22.553 ************************************ 00:11:22.553 12:42:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:22.553 00:11:22.553 real 0m0.194s 
00:11:22.553 user 0m0.162s 00:11:22.553 sys 0m0.021s 00:11:22.553 12:42:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.553 12:42:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.553 12:42:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:22.553 12:42:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:22.553 12:42:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:22.553 12:42:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.553 12:42:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.553 12:42:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.553 ************************************ 00:11:22.553 START TEST rpc_daemon_integrity 00:11:22.553 ************************************ 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:22.553 { 00:11:22.553 "name": "Malloc2", 00:11:22.553 "aliases": [ 00:11:22.553 "ff3dd4f9-61c2-4d90-852d-245eb91dbd74" 00:11:22.553 ], 00:11:22.553 "product_name": "Malloc disk", 00:11:22.553 "block_size": 512, 00:11:22.553 "num_blocks": 16384, 00:11:22.553 "uuid": "ff3dd4f9-61c2-4d90-852d-245eb91dbd74", 00:11:22.553 "assigned_rate_limits": { 00:11:22.553 "rw_ios_per_sec": 0, 00:11:22.553 "rw_mbytes_per_sec": 0, 00:11:22.553 "r_mbytes_per_sec": 0, 00:11:22.553 "w_mbytes_per_sec": 0 00:11:22.553 }, 00:11:22.553 "claimed": false, 00:11:22.553 "zoned": false, 00:11:22.553 "supported_io_types": { 00:11:22.553 "read": true, 00:11:22.553 "write": true, 00:11:22.553 "unmap": true, 00:11:22.553 "flush": true, 00:11:22.553 "reset": true, 00:11:22.553 "nvme_admin": false, 00:11:22.553 "nvme_io": false, 00:11:22.553 "nvme_io_md": false, 00:11:22.553 "write_zeroes": true, 00:11:22.553 "zcopy": true, 00:11:22.553 "get_zone_info": false, 00:11:22.553 "zone_management": false, 00:11:22.553 "zone_append": false, 00:11:22.553 "compare": false, 00:11:22.553 
"compare_and_write": false, 00:11:22.553 "abort": true, 00:11:22.553 "seek_hole": false, 00:11:22.553 "seek_data": false, 00:11:22.553 "copy": true, 00:11:22.553 "nvme_iov_md": false 00:11:22.553 }, 00:11:22.553 "memory_domains": [ 00:11:22.553 { 00:11:22.553 "dma_device_id": "system", 00:11:22.553 "dma_device_type": 1 00:11:22.553 }, 00:11:22.553 { 00:11:22.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.553 "dma_device_type": 2 00:11:22.553 } 00:11:22.553 ], 00:11:22.553 "driver_specific": {} 00:11:22.553 } 00:11:22.553 ]' 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.553 [2024-12-05 12:42:22.322464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:22.553 [2024-12-05 12:42:22.322538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.553 [2024-12-05 12:42:22.322572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:22.553 [2024-12-05 12:42:22.322582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.553 [2024-12-05 12:42:22.324997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.553 [2024-12-05 12:42:22.325032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:22.553 Passthru0 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.553 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:22.553 { 00:11:22.553 "name": "Malloc2", 00:11:22.553 "aliases": [ 00:11:22.553 "ff3dd4f9-61c2-4d90-852d-245eb91dbd74" 00:11:22.553 ], 00:11:22.553 "product_name": "Malloc disk", 00:11:22.553 "block_size": 512, 00:11:22.553 "num_blocks": 16384, 00:11:22.553 "uuid": "ff3dd4f9-61c2-4d90-852d-245eb91dbd74", 00:11:22.553 "assigned_rate_limits": { 00:11:22.553 "rw_ios_per_sec": 0, 00:11:22.553 "rw_mbytes_per_sec": 0, 00:11:22.553 "r_mbytes_per_sec": 0, 00:11:22.553 "w_mbytes_per_sec": 0 00:11:22.553 }, 00:11:22.553 "claimed": true, 00:11:22.553 "claim_type": "exclusive_write", 00:11:22.553 "zoned": false, 00:11:22.553 "supported_io_types": { 00:11:22.553 "read": true, 00:11:22.553 "write": true, 00:11:22.553 "unmap": true, 00:11:22.553 "flush": true, 00:11:22.553 "reset": true, 00:11:22.553 "nvme_admin": false, 00:11:22.553 "nvme_io": false, 00:11:22.553 "nvme_io_md": false, 00:11:22.553 "write_zeroes": true, 00:11:22.553 "zcopy": true, 00:11:22.553 "get_zone_info": false, 00:11:22.553 "zone_management": false, 00:11:22.553 "zone_append": false, 00:11:22.553 "compare": false, 00:11:22.553 "compare_and_write": false, 00:11:22.553 "abort": true, 00:11:22.553 "seek_hole": false, 00:11:22.553 "seek_data": false, 
00:11:22.553 "copy": true, 00:11:22.553 "nvme_iov_md": false 00:11:22.553 }, 00:11:22.553 "memory_domains": [ 00:11:22.553 { 00:11:22.553 "dma_device_id": "system", 00:11:22.553 "dma_device_type": 1 00:11:22.553 }, 00:11:22.553 { 00:11:22.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.553 "dma_device_type": 2 00:11:22.553 } 00:11:22.553 ], 00:11:22.553 "driver_specific": {} 00:11:22.553 }, 00:11:22.553 { 00:11:22.553 "name": "Passthru0", 00:11:22.553 "aliases": [ 00:11:22.553 "a967b0b0-bfd9-5bde-a0b7-363232967786" 00:11:22.553 ], 00:11:22.553 "product_name": "passthru", 00:11:22.553 "block_size": 512, 00:11:22.553 "num_blocks": 16384, 00:11:22.553 "uuid": "a967b0b0-bfd9-5bde-a0b7-363232967786", 00:11:22.553 "assigned_rate_limits": { 00:11:22.553 "rw_ios_per_sec": 0, 00:11:22.553 "rw_mbytes_per_sec": 0, 00:11:22.553 "r_mbytes_per_sec": 0, 00:11:22.553 "w_mbytes_per_sec": 0 00:11:22.553 }, 00:11:22.553 "claimed": false, 00:11:22.553 "zoned": false, 00:11:22.553 "supported_io_types": { 00:11:22.553 "read": true, 00:11:22.553 "write": true, 00:11:22.553 "unmap": true, 00:11:22.553 "flush": true, 00:11:22.553 "reset": true, 00:11:22.553 "nvme_admin": false, 00:11:22.553 "nvme_io": false, 00:11:22.553 "nvme_io_md": false, 00:11:22.553 "write_zeroes": true, 00:11:22.553 "zcopy": true, 00:11:22.553 "get_zone_info": false, 00:11:22.553 "zone_management": false, 00:11:22.553 "zone_append": false, 00:11:22.553 "compare": false, 00:11:22.553 "compare_and_write": false, 00:11:22.553 "abort": true, 00:11:22.553 "seek_hole": false, 00:11:22.553 "seek_data": false, 00:11:22.553 "copy": true, 00:11:22.553 "nvme_iov_md": false 00:11:22.554 }, 00:11:22.554 "memory_domains": [ 00:11:22.554 { 00:11:22.554 "dma_device_id": "system", 00:11:22.554 "dma_device_type": 1 00:11:22.554 }, 00:11:22.554 { 00:11:22.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.554 "dma_device_type": 2 00:11:22.554 } 00:11:22.554 ], 00:11:22.554 "driver_specific": { 00:11:22.554 "passthru": { 00:11:22.554 "name": "Passthru0", 00:11:22.554 "base_bdev_name": "Malloc2" 00:11:22.554 } 00:11:22.554 } 00:11:22.554 } 00:11:22.554 ]' 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.554 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.811 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:11:22.811 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:22.811 ************************************ 00:11:22.811 END TEST rpc_daemon_integrity 00:11:22.811 ************************************ 00:11:22.811 12:42:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:22.811 00:11:22.811 real 0m0.229s 00:11:22.811 user 0m0.133s 00:11:22.811 sys 0m0.030s 00:11:22.811 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.811 12:42:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:22.811 12:42:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:22.811 12:42:22 rpc -- rpc/rpc.sh@84 -- # killprocess 69582 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@954 -- # '[' -z 69582 ']' 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@958 -- # kill -0 69582 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@959 -- # uname 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69582 00:11:22.811 killing process with pid 69582 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69582' 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@973 -- # kill 69582 00:11:22.811 12:42:22 rpc -- common/autotest_common.sh@978 -- # wait 69582 00:11:23.069 00:11:23.069 real 0m2.433s 00:11:23.069 user 0m2.961s 00:11:23.069 sys 0m0.604s 00:11:23.069 ************************************ 00:11:23.069 END TEST rpc 00:11:23.069 ************************************ 00:11:23.069 12:42:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.069 12:42:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.069 12:42:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:23.069 12:42:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.069 12:42:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.069 12:42:22 -- common/autotest_common.sh@10 -- # set +x 00:11:23.069 ************************************ 00:11:23.069 START TEST skip_rpc 00:11:23.069 ************************************ 00:11:23.069 12:42:22 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:23.328 * Looking for test storage... 
00:11:23.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:23.328 12:42:22 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:23.328 12:42:22 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:23.328 12:42:22 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:23.328 12:42:23 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.328 12:42:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:11:23.328 12:42:23 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.328 12:42:23 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:23.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.328 --rc genhtml_branch_coverage=1 00:11:23.328 --rc genhtml_function_coverage=1 00:11:23.328 --rc genhtml_legend=1 00:11:23.328 --rc geninfo_all_blocks=1 00:11:23.328 --rc geninfo_unexecuted_blocks=1 00:11:23.328 00:11:23.328 ' 00:11:23.328 12:42:23 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:23.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.328 --rc genhtml_branch_coverage=1 00:11:23.328 --rc genhtml_function_coverage=1 00:11:23.328 --rc genhtml_legend=1 00:11:23.328 --rc geninfo_all_blocks=1 00:11:23.328 --rc geninfo_unexecuted_blocks=1 00:11:23.328 00:11:23.328 ' 00:11:23.328 12:42:23 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:23.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.328 --rc genhtml_branch_coverage=1 00:11:23.328 --rc genhtml_function_coverage=1 00:11:23.328 --rc genhtml_legend=1 00:11:23.328 --rc geninfo_all_blocks=1 00:11:23.328 --rc geninfo_unexecuted_blocks=1 00:11:23.328 00:11:23.328 ' 00:11:23.328 12:42:23 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:23.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.328 --rc genhtml_branch_coverage=1 00:11:23.328 --rc genhtml_function_coverage=1 00:11:23.328 --rc genhtml_legend=1 00:11:23.328 --rc geninfo_all_blocks=1 00:11:23.328 --rc geninfo_unexecuted_blocks=1 00:11:23.328 00:11:23.328 ' 00:11:23.328 12:42:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:23.329 12:42:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:23.329 12:42:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:23.329 12:42:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.329 12:42:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.329 12:42:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.329 ************************************ 00:11:23.329 START TEST skip_rpc 00:11:23.329 ************************************ 00:11:23.329 12:42:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:11:23.329 12:42:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69783 00:11:23.329 12:42:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:23.329 12:42:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:23.329 12:42:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:23.329 [2024-12-05 12:42:23.109153] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:23.329 [2024-12-05 12:42:23.109518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69783 ] 00:11:23.587 [2024-12-05 12:42:23.267792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.587 [2024-12-05 12:42:23.292431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69783 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69783 ']' 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69783 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69783 00:11:28.841 killing process with pid 69783 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69783' 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69783 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69783 00:11:28.841 00:11:28.841 real 0m5.352s 00:11:28.841 user 0m4.945s 00:11:28.841 sys 0m0.300s 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.841 12:42:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.841 ************************************ 00:11:28.841 END TEST skip_rpc 00:11:28.841 
************************************ 00:11:28.841 12:42:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:28.841 12:42:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.841 12:42:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.841 12:42:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.841 ************************************ 00:11:28.841 START TEST skip_rpc_with_json 00:11:28.841 ************************************ 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69871 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69871 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:28.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69871 ']' 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.841 12:42:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:28.841 [2024-12-05 12:42:28.509207] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:28.841 [2024-12-05 12:42:28.509351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69871 ] 00:11:28.841 [2024-12-05 12:42:28.663732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.117 [2024-12-05 12:42:28.690242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:29.683 [2024-12-05 12:42:29.372887] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:29.683 request: 00:11:29.683 { 00:11:29.683 "trtype": "tcp", 00:11:29.683 "method": "nvmf_get_transports", 00:11:29.683 "req_id": 1 00:11:29.683 } 00:11:29.683 Got JSON-RPC error response 00:11:29.683 response: 00:11:29.683 { 00:11:29.683 "code": -19, 00:11:29.683 "message": "No such device" 00:11:29.683 } 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:29.683 [2024-12-05 12:42:29.380983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.683 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:29.942 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.942 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:29.942 { 00:11:29.942 "subsystems": [ 00:11:29.942 { 00:11:29.942 "subsystem": "fsdev", 00:11:29.942 "config": [ 00:11:29.942 { 00:11:29.942 "method": "fsdev_set_opts", 00:11:29.942 "params": { 00:11:29.942 "fsdev_io_pool_size": 65535, 00:11:29.942 "fsdev_io_cache_size": 256 00:11:29.942 } 00:11:29.942 } 00:11:29.942 ] 00:11:29.942 }, 00:11:29.942 { 00:11:29.942 "subsystem": "keyring", 00:11:29.942 "config": [] 00:11:29.942 }, 00:11:29.942 { 00:11:29.942 "subsystem": "iobuf", 00:11:29.942 "config": [ 00:11:29.942 { 00:11:29.942 "method": "iobuf_set_options", 00:11:29.942 "params": { 00:11:29.942 "small_pool_count": 8192, 00:11:29.942 "large_pool_count": 1024, 00:11:29.942 "small_bufsize": 8192, 00:11:29.942 "large_bufsize": 135168, 00:11:29.942 "enable_numa": false 00:11:29.942 } 00:11:29.942 } 00:11:29.942 ] 00:11:29.942 }, 00:11:29.942 { 00:11:29.942 "subsystem": "sock", 00:11:29.942 "config": [ 00:11:29.942 { 
00:11:29.942 "method": "sock_set_default_impl", 00:11:29.942 "params": { 00:11:29.942 "impl_name": "posix" 00:11:29.942 } 00:11:29.942 }, 00:11:29.942 { 00:11:29.942 "method": "sock_impl_set_options", 00:11:29.942 "params": { 00:11:29.942 "impl_name": "ssl", 00:11:29.942 "recv_buf_size": 4096, 00:11:29.942 "send_buf_size": 4096, 00:11:29.942 "enable_recv_pipe": true, 00:11:29.942 "enable_quickack": false, 00:11:29.942 "enable_placement_id": 0, 00:11:29.942 "enable_zerocopy_send_server": true, 00:11:29.942 "enable_zerocopy_send_client": false, 00:11:29.942 "zerocopy_threshold": 0, 00:11:29.942 "tls_version": 0, 00:11:29.942 "enable_ktls": false 00:11:29.942 } 00:11:29.942 }, 00:11:29.942 { 00:11:29.942 "method": "sock_impl_set_options", 00:11:29.942 "params": { 00:11:29.942 "impl_name": "posix", 00:11:29.942 "recv_buf_size": 2097152, 00:11:29.942 "send_buf_size": 2097152, 00:11:29.942 "enable_recv_pipe": true, 00:11:29.942 "enable_quickack": false, 00:11:29.942 "enable_placement_id": 0, 00:11:29.942 "enable_zerocopy_send_server": true, 00:11:29.942 "enable_zerocopy_send_client": false, 00:11:29.942 "zerocopy_threshold": 0, 00:11:29.943 "tls_version": 0, 00:11:29.943 "enable_ktls": false 00:11:29.943 } 00:11:29.943 } 00:11:29.943 ] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "vmd", 00:11:29.943 "config": [] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "accel", 00:11:29.943 "config": [ 00:11:29.943 { 00:11:29.943 "method": "accel_set_options", 00:11:29.943 "params": { 00:11:29.943 "small_cache_size": 128, 00:11:29.943 "large_cache_size": 16, 00:11:29.943 "task_count": 2048, 00:11:29.943 "sequence_count": 2048, 00:11:29.943 "buf_count": 2048 00:11:29.943 } 00:11:29.943 } 00:11:29.943 ] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "bdev", 00:11:29.943 "config": [ 00:11:29.943 { 00:11:29.943 "method": "bdev_set_options", 00:11:29.943 "params": { 00:11:29.943 "bdev_io_pool_size": 65535, 00:11:29.943 "bdev_io_cache_size": 256, 00:11:29.943 "bdev_auto_examine": true, 00:11:29.943 "iobuf_small_cache_size": 128, 00:11:29.943 "iobuf_large_cache_size": 16 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "bdev_raid_set_options", 00:11:29.943 "params": { 00:11:29.943 "process_window_size_kb": 1024, 00:11:29.943 "process_max_bandwidth_mb_sec": 0 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "bdev_iscsi_set_options", 00:11:29.943 "params": { 00:11:29.943 "timeout_sec": 30 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "bdev_nvme_set_options", 00:11:29.943 "params": { 00:11:29.943 "action_on_timeout": "none", 00:11:29.943 "timeout_us": 0, 00:11:29.943 "timeout_admin_us": 0, 00:11:29.943 "keep_alive_timeout_ms": 10000, 00:11:29.943 "arbitration_burst": 0, 00:11:29.943 "low_priority_weight": 0, 00:11:29.943 "medium_priority_weight": 0, 00:11:29.943 "high_priority_weight": 0, 00:11:29.943 "nvme_adminq_poll_period_us": 10000, 00:11:29.943 "nvme_ioq_poll_period_us": 0, 00:11:29.943 "io_queue_requests": 0, 00:11:29.943 "delay_cmd_submit": true, 00:11:29.943 "transport_retry_count": 4, 00:11:29.943 "bdev_retry_count": 3, 00:11:29.943 "transport_ack_timeout": 0, 00:11:29.943 "ctrlr_loss_timeout_sec": 0, 00:11:29.943 "reconnect_delay_sec": 0, 00:11:29.943 "fast_io_fail_timeout_sec": 0, 00:11:29.943 "disable_auto_failback": false, 00:11:29.943 "generate_uuids": false, 00:11:29.943 "transport_tos": 0, 00:11:29.943 "nvme_error_stat": false, 00:11:29.943 "rdma_srq_size": 0, 00:11:29.943 "io_path_stat": false, 
00:11:29.943 "allow_accel_sequence": false, 00:11:29.943 "rdma_max_cq_size": 0, 00:11:29.943 "rdma_cm_event_timeout_ms": 0, 00:11:29.943 "dhchap_digests": [ 00:11:29.943 "sha256", 00:11:29.943 "sha384", 00:11:29.943 "sha512" 00:11:29.943 ], 00:11:29.943 "dhchap_dhgroups": [ 00:11:29.943 "null", 00:11:29.943 "ffdhe2048", 00:11:29.943 "ffdhe3072", 00:11:29.943 "ffdhe4096", 00:11:29.943 "ffdhe6144", 00:11:29.943 "ffdhe8192" 00:11:29.943 ] 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "bdev_nvme_set_hotplug", 00:11:29.943 "params": { 00:11:29.943 "period_us": 100000, 00:11:29.943 "enable": false 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "bdev_wait_for_examine" 00:11:29.943 } 00:11:29.943 ] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "scsi", 00:11:29.943 "config": null 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "scheduler", 00:11:29.943 "config": [ 00:11:29.943 { 00:11:29.943 "method": "framework_set_scheduler", 00:11:29.943 "params": { 00:11:29.943 "name": "static" 00:11:29.943 } 00:11:29.943 } 00:11:29.943 ] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "vhost_scsi", 00:11:29.943 "config": [] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "vhost_blk", 00:11:29.943 "config": [] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "ublk", 00:11:29.943 "config": [] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "nbd", 00:11:29.943 "config": [] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "nvmf", 00:11:29.943 "config": [ 00:11:29.943 { 00:11:29.943 "method": "nvmf_set_config", 00:11:29.943 "params": { 00:11:29.943 "discovery_filter": "match_any", 00:11:29.943 "admin_cmd_passthru": { 00:11:29.943 "identify_ctrlr": false 00:11:29.943 }, 00:11:29.943 "dhchap_digests": [ 00:11:29.943 "sha256", 00:11:29.943 "sha384", 00:11:29.943 "sha512" 00:11:29.943 ], 00:11:29.943 "dhchap_dhgroups": [ 00:11:29.943 "null", 00:11:29.943 "ffdhe2048", 00:11:29.943 "ffdhe3072", 00:11:29.943 "ffdhe4096", 00:11:29.943 "ffdhe6144", 00:11:29.943 "ffdhe8192" 00:11:29.943 ] 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "nvmf_set_max_subsystems", 00:11:29.943 "params": { 00:11:29.943 "max_subsystems": 1024 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "nvmf_set_crdt", 00:11:29.943 "params": { 00:11:29.943 "crdt1": 0, 00:11:29.943 "crdt2": 0, 00:11:29.943 "crdt3": 0 00:11:29.943 } 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "method": "nvmf_create_transport", 00:11:29.943 "params": { 00:11:29.943 "trtype": "TCP", 00:11:29.943 "max_queue_depth": 128, 00:11:29.943 "max_io_qpairs_per_ctrlr": 127, 00:11:29.943 "in_capsule_data_size": 4096, 00:11:29.943 "max_io_size": 131072, 00:11:29.943 "io_unit_size": 131072, 00:11:29.943 "max_aq_depth": 128, 00:11:29.943 "num_shared_buffers": 511, 00:11:29.943 "buf_cache_size": 4294967295, 00:11:29.943 "dif_insert_or_strip": false, 00:11:29.943 "zcopy": false, 00:11:29.943 "c2h_success": true, 00:11:29.943 "sock_priority": 0, 00:11:29.943 "abort_timeout_sec": 1, 00:11:29.943 "ack_timeout": 0, 00:11:29.943 "data_wr_pool_size": 0 00:11:29.943 } 00:11:29.943 } 00:11:29.943 ] 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "subsystem": "iscsi", 00:11:29.943 "config": [ 00:11:29.943 { 00:11:29.943 "method": "iscsi_set_options", 00:11:29.943 "params": { 00:11:29.943 "node_base": "iqn.2016-06.io.spdk", 00:11:29.943 "max_sessions": 128, 00:11:29.943 "max_connections_per_session": 2, 00:11:29.943 "max_queue_depth": 64, 00:11:29.943 
"default_time2wait": 2, 00:11:29.943 "default_time2retain": 20, 00:11:29.943 "first_burst_length": 8192, 00:11:29.943 "immediate_data": true, 00:11:29.943 "allow_duplicated_isid": false, 00:11:29.943 "error_recovery_level": 0, 00:11:29.943 "nop_timeout": 60, 00:11:29.943 "nop_in_interval": 30, 00:11:29.943 "disable_chap": false, 00:11:29.943 "require_chap": false, 00:11:29.943 "mutual_chap": false, 00:11:29.943 "chap_group": 0, 00:11:29.943 "max_large_datain_per_connection": 64, 00:11:29.943 "max_r2t_per_connection": 4, 00:11:29.943 "pdu_pool_size": 36864, 00:11:29.943 "immediate_data_pool_size": 16384, 00:11:29.943 "data_out_pool_size": 2048 00:11:29.943 } 00:11:29.943 } 00:11:29.943 ] 00:11:29.943 } 00:11:29.943 ] 00:11:29.943 } 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69871 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69871 ']' 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69871 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69871 00:11:29.943 killing process with pid 69871 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69871' 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69871 00:11:29.943 12:42:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69871 00:11:30.201 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69894 00:11:30.201 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:30.201 12:42:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69894 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69894 ']' 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69894 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69894 00:11:35.477 killing process with pid 69894 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69894' 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69894 00:11:35.477 12:42:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69894 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:35.477 00:11:35.477 real 0m6.826s 00:11:35.477 user 0m6.399s 00:11:35.477 sys 0m0.660s 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:35.477 ************************************ 00:11:35.477 END TEST skip_rpc_with_json 00:11:35.477 ************************************ 00:11:35.477 12:42:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:35.477 12:42:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.477 12:42:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.477 12:42:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.477 ************************************ 00:11:35.477 START TEST skip_rpc_with_delay 00:11:35.477 ************************************ 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:35.477 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:35.734 [2024-12-05 12:42:35.390996] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
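The ERROR line above is the expected outcome: skip_rpc_with_delay verifies that spdk_tgt refuses --wait-for-rpc when started with --no-rpc-server, since there is no RPC server to wait for. A sketch of that negative check, using the same binary path and flags as the run above:

    # The flag combination must fail; success is treated as the error.
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    if "$tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'ERROR: spdk_tgt accepted --wait-for-rpc without an RPC server' >&2
        exit 1
    fi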
00:11:35.734 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:11:35.734 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.734 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.734 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.734 00:11:35.734 real 0m0.129s 00:11:35.734 user 0m0.069s 00:11:35.734 sys 0m0.058s 00:11:35.734 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.734 12:42:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:35.734 ************************************ 00:11:35.734 END TEST skip_rpc_with_delay 00:11:35.734 ************************************ 00:11:35.734 12:42:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:35.734 12:42:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:35.734 12:42:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:35.734 12:42:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:35.734 12:42:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.734 12:42:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.734 ************************************ 00:11:35.734 START TEST exit_on_failed_rpc_init 00:11:35.734 ************************************ 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:11:35.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70005 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70005 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 70005 ']' 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.734 12:42:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:35.734 [2024-12-05 12:42:35.573439] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:35.734 [2024-12-05 12:42:35.573579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70005 ] 00:11:35.991 [2024-12-05 12:42:35.733794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.991 [2024-12-05 12:42:35.760222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:36.580 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:36.837 [2024-12-05 12:42:36.487164] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:36.837 [2024-12-05 12:42:36.487287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70023 ] 00:11:36.837 [2024-12-05 12:42:36.641554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.837 [2024-12-05 12:42:36.668403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.837 [2024-12-05 12:42:36.668510] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
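The "socket in use" error above is deliberate: exit_on_failed_rpc_init starts a second spdk_tgt without giving it its own RPC socket, so the listen step fails and the app stops non-zero. A deliberately simplified sketch of that collision, assuming default sockets for both instances (the sleep and the $first name are ours; the harness waits properly and traps cleanup instead):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 & first=$!
    sleep 2    # crude; the real test waits for /var/tmp/spdk.sock to appear
    if "$tgt" -m 0x2; then
        echo 'ERROR: second target bound an RPC socket already in use' >&2
    fi
    kill "$first"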
00:11:36.837 [2024-12-05 12:42:36.668530] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:36.837 [2024-12-05 12:42:36.668540] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70005 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 70005 ']' 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 70005 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70005 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.094 killing process with pid 70005 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70005' 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 70005 00:11:37.094 12:42:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 70005 00:11:37.368 ************************************ 00:11:37.368 END TEST exit_on_failed_rpc_init 00:11:37.368 ************************************ 00:11:37.368 00:11:37.368 real 0m1.608s 00:11:37.368 user 0m1.727s 00:11:37.368 sys 0m0.426s 00:11:37.368 12:42:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.368 12:42:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:37.368 12:42:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:37.368 ************************************ 00:11:37.368 END TEST skip_rpc 00:11:37.368 ************************************ 00:11:37.368 00:11:37.368 real 0m14.271s 00:11:37.368 user 0m13.290s 00:11:37.368 sys 0m1.622s 00:11:37.368 12:42:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.368 12:42:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.368 12:42:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:37.368 12:42:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:37.368 12:42:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.368 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:37.368 
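Every START/END banner pair with its real/user/sys timing in this log is emitted by the harness's run_test wrapper, invoked as run_test NAME COMMAND ARGS. A rough approximation of that helper (the real one in autotest_common.sh also manages xtrace state and propagates the command's exit code):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # produces the real/user/sys lines seen above
        echo "************ END TEST $name ************"
    }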
************************************ 00:11:37.368 START TEST rpc_client 00:11:37.368 ************************************ 00:11:37.368 12:42:37 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:37.625 * Looking for test storage... 00:11:37.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.625 12:42:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.625 --rc genhtml_branch_coverage=1 00:11:37.625 --rc genhtml_function_coverage=1 00:11:37.625 --rc genhtml_legend=1 00:11:37.625 --rc geninfo_all_blocks=1 00:11:37.625 --rc geninfo_unexecuted_blocks=1 00:11:37.625 00:11:37.625 ' 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.625 --rc genhtml_branch_coverage=1 00:11:37.625 --rc genhtml_function_coverage=1 00:11:37.625 --rc genhtml_legend=1 00:11:37.625 --rc geninfo_all_blocks=1 00:11:37.625 --rc geninfo_unexecuted_blocks=1 00:11:37.625 00:11:37.625 ' 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.625 --rc genhtml_branch_coverage=1 00:11:37.625 --rc genhtml_function_coverage=1 00:11:37.625 --rc genhtml_legend=1 00:11:37.625 --rc geninfo_all_blocks=1 00:11:37.625 --rc geninfo_unexecuted_blocks=1 00:11:37.625 00:11:37.625 ' 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:37.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.625 --rc genhtml_branch_coverage=1 00:11:37.625 --rc genhtml_function_coverage=1 00:11:37.625 --rc genhtml_legend=1 00:11:37.625 --rc geninfo_all_blocks=1 00:11:37.625 --rc geninfo_unexecuted_blocks=1 00:11:37.625 00:11:37.625 ' 00:11:37.625 12:42:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:37.625 OK 00:11:37.625 12:42:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:37.625 00:11:37.625 real 0m0.187s 00:11:37.625 user 0m0.115s 00:11:37.625 sys 0m0.083s 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.625 12:42:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:37.625 ************************************ 00:11:37.625 END TEST rpc_client 00:11:37.625 ************************************ 00:11:37.625 12:42:37 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:37.625 12:42:37 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:37.625 12:42:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.625 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:37.625 ************************************ 00:11:37.625 START TEST json_config 00:11:37.625 ************************************ 00:11:37.625 12:42:37 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:37.625 12:42:37 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:37.625 12:42:37 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:11:37.625 12:42:37 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:37.883 12:42:37 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:37.883 12:42:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.883 12:42:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.883 12:42:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.883 12:42:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.883 12:42:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.884 12:42:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.884 12:42:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.884 12:42:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.884 12:42:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.884 12:42:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.884 12:42:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.884 12:42:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:11:37.884 12:42:37 json_config -- scripts/common.sh@345 -- # : 1 00:11:37.884 12:42:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.884 12:42:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.884 12:42:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:11:37.884 12:42:37 json_config -- scripts/common.sh@353 -- # local d=1 00:11:37.884 12:42:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.884 12:42:37 json_config -- scripts/common.sh@355 -- # echo 1 00:11:37.884 12:42:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.884 12:42:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:11:37.884 12:42:37 json_config -- scripts/common.sh@353 -- # local d=2 00:11:37.884 12:42:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.884 12:42:37 json_config -- scripts/common.sh@355 -- # echo 2 00:11:37.884 12:42:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.884 12:42:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.884 12:42:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.884 12:42:37 json_config -- scripts/common.sh@368 -- # return 0 00:11:37.884 12:42:37 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.884 12:42:37 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:37.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.884 --rc genhtml_branch_coverage=1 00:11:37.884 --rc genhtml_function_coverage=1 00:11:37.884 --rc genhtml_legend=1 00:11:37.884 --rc geninfo_all_blocks=1 00:11:37.884 --rc geninfo_unexecuted_blocks=1 00:11:37.884 00:11:37.884 ' 00:11:37.884 12:42:37 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:37.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.884 --rc genhtml_branch_coverage=1 00:11:37.884 --rc genhtml_function_coverage=1 00:11:37.884 --rc genhtml_legend=1 00:11:37.884 --rc geninfo_all_blocks=1 00:11:37.884 --rc geninfo_unexecuted_blocks=1 00:11:37.884 00:11:37.884 ' 00:11:37.884 12:42:37 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:37.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.884 --rc genhtml_branch_coverage=1 00:11:37.884 --rc genhtml_function_coverage=1 00:11:37.884 --rc genhtml_legend=1 00:11:37.884 --rc geninfo_all_blocks=1 00:11:37.884 --rc geninfo_unexecuted_blocks=1 00:11:37.884 00:11:37.884 ' 00:11:37.884 12:42:37 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:37.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.884 --rc genhtml_branch_coverage=1 00:11:37.884 --rc genhtml_function_coverage=1 00:11:37.884 --rc genhtml_legend=1 00:11:37.884 --rc geninfo_all_blocks=1 00:11:37.884 --rc geninfo_unexecuted_blocks=1 00:11:37.884 00:11:37.884 ' 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.884 12:42:37 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f3c36401-3e16-41f5-8c92-eb65c167cf60 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f3c36401-3e16-41f5-8c92-eb65c167cf60 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.884 12:42:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.884 12:42:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.884 12:42:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.884 12:42:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.884 12:42:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.884 12:42:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.884 12:42:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.884 12:42:37 json_config -- paths/export.sh@5 -- # export PATH 00:11:37.884 12:42:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@51 -- # : 0 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.884 12:42:37 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.884 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.884 12:42:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:11:37.884 WARNING: No tests are enabled so not running JSON configuration tests 00:11:37.884 12:42:37 json_config -- json_config/json_config.sh@28 -- # exit 0 00:11:37.884 00:11:37.884 real 0m0.138s 00:11:37.884 user 0m0.090s 00:11:37.884 sys 0m0.051s 00:11:37.884 12:42:37 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.884 12:42:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:37.884 ************************************ 00:11:37.884 END TEST json_config 00:11:37.884 ************************************ 00:11:37.884 12:42:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:37.884 12:42:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:37.884 12:42:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.884 12:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:37.884 ************************************ 00:11:37.884 START TEST json_config_extra_key 00:11:37.884 ************************************ 00:11:37.884 12:42:37 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:37.884 12:42:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:37.884 12:42:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:11:37.884 12:42:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:37.884 12:42:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.884 12:42:37 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.884 12:42:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:11:37.884 12:42:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.884 12:42:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:37.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.884 --rc genhtml_branch_coverage=1 00:11:37.884 --rc genhtml_function_coverage=1 00:11:37.884 --rc genhtml_legend=1 00:11:37.884 --rc geninfo_all_blocks=1 00:11:37.884 --rc geninfo_unexecuted_blocks=1 00:11:37.884 00:11:37.884 ' 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:37.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.885 --rc genhtml_branch_coverage=1 00:11:37.885 --rc genhtml_function_coverage=1 00:11:37.885 --rc genhtml_legend=1 00:11:37.885 --rc geninfo_all_blocks=1 00:11:37.885 --rc geninfo_unexecuted_blocks=1 00:11:37.885 00:11:37.885 ' 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:37.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.885 --rc genhtml_branch_coverage=1 00:11:37.885 --rc genhtml_function_coverage=1 00:11:37.885 --rc genhtml_legend=1 00:11:37.885 --rc geninfo_all_blocks=1 00:11:37.885 --rc geninfo_unexecuted_blocks=1 00:11:37.885 00:11:37.885 ' 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:37.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.885 --rc genhtml_branch_coverage=1 00:11:37.885 --rc 
genhtml_function_coverage=1 00:11:37.885 --rc genhtml_legend=1 00:11:37.885 --rc geninfo_all_blocks=1 00:11:37.885 --rc geninfo_unexecuted_blocks=1 00:11:37.885 00:11:37.885 ' 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f3c36401-3e16-41f5-8c92-eb65c167cf60 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f3c36401-3e16-41f5-8c92-eb65c167cf60 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.885 12:42:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.885 12:42:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.885 12:42:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.885 12:42:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.885 12:42:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.885 12:42:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.885 12:42:37 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.885 12:42:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:37.885 12:42:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.885 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.885 12:42:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:37.885 INFO: launching applications... 00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
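Every TEST block in this run opens with the same lcov gate: `lt 1.15 2` calls `cmp_versions` from scripts/common.sh to decide whether the installed lcov predates 2.x, and that verdict picks which flavor of LCOV_OPTS gets exported. A condensed sketch of the comparison the traces above walk through (only the '<' arm of the real case statement is shown, and the `:-0` default for missing components is a simplification of the script's `decimal` validation):

    # Split both versions on '.', '-' and ':', then compare component-wise;
    # return 0 when $1 is strictly older than $3.
    cmp_versions() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # component newer: '<' fails
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # component older: '<' holds
        done
        return 1                                              # equal throughout: not less-than
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov 1.x detected'

With lcov 1.15 the first component already settles it (1 < 2, return 0), which is why every test here exports the 1.x-style `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options.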
00:11:37.885 12:42:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70206 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:37.885 Waiting for target to run... 00:11:37.885 12:42:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70206 /var/tmp/spdk_tgt.sock 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70206 ']' 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:37.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.885 12:42:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:38.143 12:42:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:38.143 [2024-12-05 12:42:37.810853] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:38.143 [2024-12-05 12:42:37.811156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70206 ] 00:11:38.400 [2024-12-05 12:42:38.190947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.400 [2024-12-05 12:42:38.206913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.964 00:11:38.964 INFO: shutting down applications... 00:11:38.964 12:42:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.964 12:42:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:38.964 12:42:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
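The launch just traced and the shutdown traced next are the two halves of the test's app lifecycle: start spdk_tgt with the recorded flags, wait for its RPC socket, then SIGINT it and poll until it exits. A minimal stand-alone sketch (variable names are mine; the socket-existence probe and the 0.1 s interval stand in for the repo's waitforlisten helper, which does a full RPC handshake, while the retry bound of 100 and the 30 polls of 0.5 s match the traced values):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock

    "$spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!

    for ((i = 0; i < 100; i++)); do               # max_retries=100, as traced
        [[ -S $sock ]] && break                   # simplified readiness check
        sleep 0.1                                 # interval is an assumption
    done

    kill -SIGINT "$app_pid"                       # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do                # same bounds as json_config/common.sh
        kill -0 "$app_pid" 2> /dev/null || break  # kill -0 only probes liveness
        sleep 0.5
    done
    echo 'SPDK target shutdown done'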
00:11:38.964 12:42:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70206 ]] 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70206 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70206 00:11:38.964 12:42:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:39.528 12:42:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:39.528 12:42:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:39.528 12:42:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70206 00:11:39.528 12:42:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:39.528 12:42:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:39.528 12:42:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:39.528 SPDK target shutdown done 00:11:39.528 Success 00:11:39.528 12:42:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:39.528 12:42:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:39.528 00:11:39.528 real 0m1.576s 00:11:39.528 user 0m1.312s 00:11:39.528 sys 0m0.428s 00:11:39.528 ************************************ 00:11:39.528 END TEST json_config_extra_key 00:11:39.528 ************************************ 00:11:39.528 12:42:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.528 12:42:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:39.528 12:42:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:39.528 12:42:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.528 12:42:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.528 12:42:39 -- common/autotest_common.sh@10 -- # set +x 00:11:39.528 ************************************ 00:11:39.528 START TEST alias_rpc 00:11:39.528 ************************************ 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:39.528 * Looking for test storage... 
00:11:39.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:11:39.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.528 12:42:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.528 --rc genhtml_branch_coverage=1 00:11:39.528 --rc genhtml_function_coverage=1 00:11:39.528 --rc genhtml_legend=1 00:11:39.528 --rc geninfo_all_blocks=1 00:11:39.528 --rc geninfo_unexecuted_blocks=1 00:11:39.528 00:11:39.528 ' 00:11:39.528 12:42:39 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.529 --rc genhtml_branch_coverage=1 00:11:39.529 --rc genhtml_function_coverage=1 00:11:39.529 --rc genhtml_legend=1 00:11:39.529 --rc geninfo_all_blocks=1 00:11:39.529 --rc geninfo_unexecuted_blocks=1 00:11:39.529 00:11:39.529 ' 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.529 --rc genhtml_branch_coverage=1 00:11:39.529 --rc genhtml_function_coverage=1 00:11:39.529 --rc genhtml_legend=1 00:11:39.529 --rc geninfo_all_blocks=1 00:11:39.529 --rc geninfo_unexecuted_blocks=1 00:11:39.529 00:11:39.529 ' 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.529 --rc genhtml_branch_coverage=1 00:11:39.529 --rc genhtml_function_coverage=1 00:11:39.529 --rc genhtml_legend=1 00:11:39.529 --rc geninfo_all_blocks=1 00:11:39.529 --rc geninfo_unexecuted_blocks=1 00:11:39.529 00:11:39.529 ' 00:11:39.529 12:42:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:39.529 12:42:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70274 00:11:39.529 12:42:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70274 00:11:39.529 12:42:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70274 ']' 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.529 12:42:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.784 [2024-12-05 12:42:39.400514] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:39.785 [2024-12-05 12:42:39.400678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70274 ] 00:11:39.785 [2024-12-05 12:42:39.560149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.785 [2024-12-05 12:42:39.586356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:40.713 12:42:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:40.713 12:42:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70274 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70274 ']' 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70274 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70274 00:11:40.713 killing process with pid 70274 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70274' 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@973 -- # kill 70274 00:11:40.713 12:42:40 alias_rpc -- common/autotest_common.sh@978 -- # wait 70274 00:11:41.276 ************************************ 00:11:41.276 END TEST alias_rpc 00:11:41.276 ************************************ 00:11:41.276 00:11:41.276 real 0m1.627s 00:11:41.276 user 0m1.737s 00:11:41.276 sys 0m0.401s 00:11:41.276 12:42:40 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.276 12:42:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.276 12:42:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:11:41.276 12:42:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:41.276 12:42:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:41.276 12:42:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.276 12:42:40 -- common/autotest_common.sh@10 -- # set +x 00:11:41.276 ************************************ 00:11:41.276 START TEST spdkcli_tcp 00:11:41.276 ************************************ 00:11:41.276 12:42:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:41.276 * Looking for test storage... 
00:11:41.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:41.276 12:42:40 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:41.276 12:42:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:41.276 12:42:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:41.276 12:42:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.276 12:42:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:11:41.276 12:42:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.276 12:42:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.276 12:42:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.276 12:42:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:41.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.276 --rc genhtml_branch_coverage=1 00:11:41.276 --rc genhtml_function_coverage=1 00:11:41.276 --rc genhtml_legend=1 00:11:41.276 --rc geninfo_all_blocks=1 00:11:41.276 --rc geninfo_unexecuted_blocks=1 00:11:41.276 00:11:41.276 ' 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:41.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.276 --rc genhtml_branch_coverage=1 00:11:41.276 --rc genhtml_function_coverage=1 00:11:41.276 --rc genhtml_legend=1 00:11:41.276 --rc geninfo_all_blocks=1 00:11:41.276 --rc geninfo_unexecuted_blocks=1 00:11:41.276 
00:11:41.276 ' 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:41.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.276 --rc genhtml_branch_coverage=1 00:11:41.276 --rc genhtml_function_coverage=1 00:11:41.276 --rc genhtml_legend=1 00:11:41.276 --rc geninfo_all_blocks=1 00:11:41.276 --rc geninfo_unexecuted_blocks=1 00:11:41.276 00:11:41.276 ' 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:41.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.276 --rc genhtml_branch_coverage=1 00:11:41.276 --rc genhtml_function_coverage=1 00:11:41.276 --rc genhtml_legend=1 00:11:41.276 --rc geninfo_all_blocks=1 00:11:41.276 --rc geninfo_unexecuted_blocks=1 00:11:41.276 00:11:41.276 ' 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70359 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70359 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70359 ']' 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.276 12:42:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.276 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:41.276 [2024-12-05 12:42:41.090238] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:41.276 [2024-12-05 12:42:41.090393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70359 ] 00:11:41.532 [2024-12-05 12:42:41.251408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.532 [2024-12-05 12:42:41.287773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.532 [2024-12-05 12:42:41.287801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.094 12:42:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.094 12:42:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:11:42.094 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70376 00:11:42.094 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:42.094 12:42:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:42.351 [ 00:11:42.351 "bdev_malloc_delete", 00:11:42.351 "bdev_malloc_create", 00:11:42.351 "bdev_null_resize", 00:11:42.351 "bdev_null_delete", 00:11:42.351 "bdev_null_create", 00:11:42.351 "bdev_nvme_cuse_unregister", 00:11:42.351 "bdev_nvme_cuse_register", 00:11:42.351 "bdev_opal_new_user", 00:11:42.351 "bdev_opal_set_lock_state", 00:11:42.351 "bdev_opal_delete", 00:11:42.351 "bdev_opal_get_info", 00:11:42.351 "bdev_opal_create", 00:11:42.351 "bdev_nvme_opal_revert", 00:11:42.351 "bdev_nvme_opal_init", 00:11:42.351 "bdev_nvme_send_cmd", 00:11:42.351 "bdev_nvme_set_keys", 00:11:42.351 "bdev_nvme_get_path_iostat", 00:11:42.351 "bdev_nvme_get_mdns_discovery_info", 00:11:42.351 "bdev_nvme_stop_mdns_discovery", 00:11:42.351 "bdev_nvme_start_mdns_discovery", 00:11:42.351 "bdev_nvme_set_multipath_policy", 00:11:42.351 "bdev_nvme_set_preferred_path", 00:11:42.351 "bdev_nvme_get_io_paths", 00:11:42.351 "bdev_nvme_remove_error_injection", 00:11:42.351 "bdev_nvme_add_error_injection", 00:11:42.351 "bdev_nvme_get_discovery_info", 00:11:42.351 "bdev_nvme_stop_discovery", 00:11:42.351 "bdev_nvme_start_discovery", 00:11:42.351 "bdev_nvme_get_controller_health_info", 00:11:42.351 "bdev_nvme_disable_controller", 00:11:42.351 "bdev_nvme_enable_controller", 00:11:42.351 "bdev_nvme_reset_controller", 00:11:42.351 "bdev_nvme_get_transport_statistics", 00:11:42.351 "bdev_nvme_apply_firmware", 00:11:42.351 "bdev_nvme_detach_controller", 00:11:42.351 "bdev_nvme_get_controllers", 00:11:42.351 "bdev_nvme_attach_controller", 00:11:42.351 "bdev_nvme_set_hotplug", 00:11:42.351 "bdev_nvme_set_options", 00:11:42.351 "bdev_passthru_delete", 00:11:42.351 "bdev_passthru_create", 00:11:42.351 "bdev_lvol_set_parent_bdev", 00:11:42.351 "bdev_lvol_set_parent", 00:11:42.351 "bdev_lvol_check_shallow_copy", 00:11:42.351 "bdev_lvol_start_shallow_copy", 00:11:42.351 "bdev_lvol_grow_lvstore", 00:11:42.351 "bdev_lvol_get_lvols", 00:11:42.351 "bdev_lvol_get_lvstores", 00:11:42.351 "bdev_lvol_delete", 00:11:42.351 "bdev_lvol_set_read_only", 00:11:42.351 "bdev_lvol_resize", 00:11:42.351 "bdev_lvol_decouple_parent", 00:11:42.351 "bdev_lvol_inflate", 00:11:42.351 "bdev_lvol_rename", 00:11:42.351 "bdev_lvol_clone_bdev", 00:11:42.351 "bdev_lvol_clone", 00:11:42.351 "bdev_lvol_snapshot", 00:11:42.351 "bdev_lvol_create", 00:11:42.351 "bdev_lvol_delete_lvstore", 00:11:42.351 "bdev_lvol_rename_lvstore", 00:11:42.351 
"bdev_lvol_create_lvstore", 00:11:42.351 "bdev_raid_set_options", 00:11:42.351 "bdev_raid_remove_base_bdev", 00:11:42.351 "bdev_raid_add_base_bdev", 00:11:42.351 "bdev_raid_delete", 00:11:42.351 "bdev_raid_create", 00:11:42.351 "bdev_raid_get_bdevs", 00:11:42.351 "bdev_error_inject_error", 00:11:42.351 "bdev_error_delete", 00:11:42.351 "bdev_error_create", 00:11:42.351 "bdev_split_delete", 00:11:42.351 "bdev_split_create", 00:11:42.351 "bdev_delay_delete", 00:11:42.351 "bdev_delay_create", 00:11:42.351 "bdev_delay_update_latency", 00:11:42.351 "bdev_zone_block_delete", 00:11:42.351 "bdev_zone_block_create", 00:11:42.351 "blobfs_create", 00:11:42.351 "blobfs_detect", 00:11:42.351 "blobfs_set_cache_size", 00:11:42.351 "bdev_xnvme_delete", 00:11:42.351 "bdev_xnvme_create", 00:11:42.351 "bdev_aio_delete", 00:11:42.351 "bdev_aio_rescan", 00:11:42.351 "bdev_aio_create", 00:11:42.351 "bdev_ftl_set_property", 00:11:42.351 "bdev_ftl_get_properties", 00:11:42.351 "bdev_ftl_get_stats", 00:11:42.351 "bdev_ftl_unmap", 00:11:42.351 "bdev_ftl_unload", 00:11:42.351 "bdev_ftl_delete", 00:11:42.351 "bdev_ftl_load", 00:11:42.351 "bdev_ftl_create", 00:11:42.351 "bdev_virtio_attach_controller", 00:11:42.351 "bdev_virtio_scsi_get_devices", 00:11:42.351 "bdev_virtio_detach_controller", 00:11:42.351 "bdev_virtio_blk_set_hotplug", 00:11:42.351 "bdev_iscsi_delete", 00:11:42.351 "bdev_iscsi_create", 00:11:42.351 "bdev_iscsi_set_options", 00:11:42.352 "accel_error_inject_error", 00:11:42.352 "ioat_scan_accel_module", 00:11:42.352 "dsa_scan_accel_module", 00:11:42.352 "iaa_scan_accel_module", 00:11:42.352 "keyring_file_remove_key", 00:11:42.352 "keyring_file_add_key", 00:11:42.352 "keyring_linux_set_options", 00:11:42.352 "fsdev_aio_delete", 00:11:42.352 "fsdev_aio_create", 00:11:42.352 "iscsi_get_histogram", 00:11:42.352 "iscsi_enable_histogram", 00:11:42.352 "iscsi_set_options", 00:11:42.352 "iscsi_get_auth_groups", 00:11:42.352 "iscsi_auth_group_remove_secret", 00:11:42.352 "iscsi_auth_group_add_secret", 00:11:42.352 "iscsi_delete_auth_group", 00:11:42.352 "iscsi_create_auth_group", 00:11:42.352 "iscsi_set_discovery_auth", 00:11:42.352 "iscsi_get_options", 00:11:42.352 "iscsi_target_node_request_logout", 00:11:42.352 "iscsi_target_node_set_redirect", 00:11:42.352 "iscsi_target_node_set_auth", 00:11:42.352 "iscsi_target_node_add_lun", 00:11:42.352 "iscsi_get_stats", 00:11:42.352 "iscsi_get_connections", 00:11:42.352 "iscsi_portal_group_set_auth", 00:11:42.352 "iscsi_start_portal_group", 00:11:42.352 "iscsi_delete_portal_group", 00:11:42.352 "iscsi_create_portal_group", 00:11:42.352 "iscsi_get_portal_groups", 00:11:42.352 "iscsi_delete_target_node", 00:11:42.352 "iscsi_target_node_remove_pg_ig_maps", 00:11:42.352 "iscsi_target_node_add_pg_ig_maps", 00:11:42.352 "iscsi_create_target_node", 00:11:42.352 "iscsi_get_target_nodes", 00:11:42.352 "iscsi_delete_initiator_group", 00:11:42.352 "iscsi_initiator_group_remove_initiators", 00:11:42.352 "iscsi_initiator_group_add_initiators", 00:11:42.352 "iscsi_create_initiator_group", 00:11:42.352 "iscsi_get_initiator_groups", 00:11:42.352 "nvmf_set_crdt", 00:11:42.352 "nvmf_set_config", 00:11:42.352 "nvmf_set_max_subsystems", 00:11:42.352 "nvmf_stop_mdns_prr", 00:11:42.352 "nvmf_publish_mdns_prr", 00:11:42.352 "nvmf_subsystem_get_listeners", 00:11:42.352 "nvmf_subsystem_get_qpairs", 00:11:42.352 "nvmf_subsystem_get_controllers", 00:11:42.352 "nvmf_get_stats", 00:11:42.352 "nvmf_get_transports", 00:11:42.352 "nvmf_create_transport", 00:11:42.352 "nvmf_get_targets", 00:11:42.352 
"nvmf_delete_target", 00:11:42.352 "nvmf_create_target", 00:11:42.352 "nvmf_subsystem_allow_any_host", 00:11:42.352 "nvmf_subsystem_set_keys", 00:11:42.352 "nvmf_subsystem_remove_host", 00:11:42.352 "nvmf_subsystem_add_host", 00:11:42.352 "nvmf_ns_remove_host", 00:11:42.352 "nvmf_ns_add_host", 00:11:42.352 "nvmf_subsystem_remove_ns", 00:11:42.352 "nvmf_subsystem_set_ns_ana_group", 00:11:42.352 "nvmf_subsystem_add_ns", 00:11:42.352 "nvmf_subsystem_listener_set_ana_state", 00:11:42.352 "nvmf_discovery_get_referrals", 00:11:42.352 "nvmf_discovery_remove_referral", 00:11:42.352 "nvmf_discovery_add_referral", 00:11:42.352 "nvmf_subsystem_remove_listener", 00:11:42.352 "nvmf_subsystem_add_listener", 00:11:42.352 "nvmf_delete_subsystem", 00:11:42.352 "nvmf_create_subsystem", 00:11:42.352 "nvmf_get_subsystems", 00:11:42.352 "env_dpdk_get_mem_stats", 00:11:42.352 "nbd_get_disks", 00:11:42.352 "nbd_stop_disk", 00:11:42.352 "nbd_start_disk", 00:11:42.352 "ublk_recover_disk", 00:11:42.352 "ublk_get_disks", 00:11:42.352 "ublk_stop_disk", 00:11:42.352 "ublk_start_disk", 00:11:42.352 "ublk_destroy_target", 00:11:42.352 "ublk_create_target", 00:11:42.352 "virtio_blk_create_transport", 00:11:42.352 "virtio_blk_get_transports", 00:11:42.352 "vhost_controller_set_coalescing", 00:11:42.352 "vhost_get_controllers", 00:11:42.352 "vhost_delete_controller", 00:11:42.352 "vhost_create_blk_controller", 00:11:42.352 "vhost_scsi_controller_remove_target", 00:11:42.352 "vhost_scsi_controller_add_target", 00:11:42.352 "vhost_start_scsi_controller", 00:11:42.352 "vhost_create_scsi_controller", 00:11:42.352 "thread_set_cpumask", 00:11:42.352 "scheduler_set_options", 00:11:42.352 "framework_get_governor", 00:11:42.352 "framework_get_scheduler", 00:11:42.352 "framework_set_scheduler", 00:11:42.352 "framework_get_reactors", 00:11:42.352 "thread_get_io_channels", 00:11:42.352 "thread_get_pollers", 00:11:42.352 "thread_get_stats", 00:11:42.352 "framework_monitor_context_switch", 00:11:42.352 "spdk_kill_instance", 00:11:42.352 "log_enable_timestamps", 00:11:42.352 "log_get_flags", 00:11:42.352 "log_clear_flag", 00:11:42.352 "log_set_flag", 00:11:42.352 "log_get_level", 00:11:42.352 "log_set_level", 00:11:42.352 "log_get_print_level", 00:11:42.352 "log_set_print_level", 00:11:42.352 "framework_enable_cpumask_locks", 00:11:42.352 "framework_disable_cpumask_locks", 00:11:42.352 "framework_wait_init", 00:11:42.352 "framework_start_init", 00:11:42.352 "scsi_get_devices", 00:11:42.352 "bdev_get_histogram", 00:11:42.352 "bdev_enable_histogram", 00:11:42.352 "bdev_set_qos_limit", 00:11:42.352 "bdev_set_qd_sampling_period", 00:11:42.352 "bdev_get_bdevs", 00:11:42.352 "bdev_reset_iostat", 00:11:42.352 "bdev_get_iostat", 00:11:42.352 "bdev_examine", 00:11:42.352 "bdev_wait_for_examine", 00:11:42.352 "bdev_set_options", 00:11:42.352 "accel_get_stats", 00:11:42.352 "accel_set_options", 00:11:42.352 "accel_set_driver", 00:11:42.352 "accel_crypto_key_destroy", 00:11:42.352 "accel_crypto_keys_get", 00:11:42.352 "accel_crypto_key_create", 00:11:42.352 "accel_assign_opc", 00:11:42.352 "accel_get_module_info", 00:11:42.352 "accel_get_opc_assignments", 00:11:42.352 "vmd_rescan", 00:11:42.352 "vmd_remove_device", 00:11:42.352 "vmd_enable", 00:11:42.352 "sock_get_default_impl", 00:11:42.352 "sock_set_default_impl", 00:11:42.352 "sock_impl_set_options", 00:11:42.352 "sock_impl_get_options", 00:11:42.352 "iobuf_get_stats", 00:11:42.352 "iobuf_set_options", 00:11:42.352 "keyring_get_keys", 00:11:42.352 "framework_get_pci_devices", 00:11:42.352 
"framework_get_config", 00:11:42.352 "framework_get_subsystems", 00:11:42.352 "fsdev_set_opts", 00:11:42.352 "fsdev_get_opts", 00:11:42.352 "trace_get_info", 00:11:42.352 "trace_get_tpoint_group_mask", 00:11:42.352 "trace_disable_tpoint_group", 00:11:42.352 "trace_enable_tpoint_group", 00:11:42.352 "trace_clear_tpoint_mask", 00:11:42.352 "trace_set_tpoint_mask", 00:11:42.352 "notify_get_notifications", 00:11:42.352 "notify_get_types", 00:11:42.352 "spdk_get_version", 00:11:42.352 "rpc_get_methods" 00:11:42.352 ] 00:11:42.352 12:42:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:42.352 12:42:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.352 12:42:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.352 12:42:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:42.352 12:42:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70359 00:11:42.352 12:42:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70359 ']' 00:11:42.352 12:42:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70359 00:11:42.352 12:42:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:11:42.609 12:42:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.609 12:42:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70359 00:11:42.609 12:42:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.609 12:42:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.609 12:42:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70359' 00:11:42.609 killing process with pid 70359 00:11:42.609 12:42:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70359 00:11:42.609 12:42:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70359 00:11:42.865 00:11:42.865 real 0m1.695s 00:11:42.865 user 0m2.992s 00:11:42.865 sys 0m0.462s 00:11:42.865 12:42:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.865 12:42:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:42.865 ************************************ 00:11:42.865 END TEST spdkcli_tcp 00:11:42.865 ************************************ 00:11:42.865 12:42:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:42.865 12:42:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.865 12:42:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.865 12:42:42 -- common/autotest_common.sh@10 -- # set +x 00:11:42.865 ************************************ 00:11:42.865 START TEST dpdk_mem_utility 00:11:42.865 ************************************ 00:11:42.866 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:42.866 * Looking for test storage... 
00:11:42.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:42.866 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:42.866 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:11:42.866 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.122 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.122 12:42:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:11:43.122 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.122 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.122 --rc genhtml_branch_coverage=1 00:11:43.122 --rc genhtml_function_coverage=1 00:11:43.122 --rc genhtml_legend=1 00:11:43.122 --rc geninfo_all_blocks=1 00:11:43.122 --rc geninfo_unexecuted_blocks=1 00:11:43.122 00:11:43.122 ' 00:11:43.122 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.122 --rc 
genhtml_branch_coverage=1 00:11:43.122 --rc genhtml_function_coverage=1 00:11:43.122 --rc genhtml_legend=1 00:11:43.122 --rc geninfo_all_blocks=1 00:11:43.122 --rc geninfo_unexecuted_blocks=1 00:11:43.122 00:11:43.122 ' 00:11:43.122 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.122 --rc genhtml_branch_coverage=1 00:11:43.122 --rc genhtml_function_coverage=1 00:11:43.122 --rc genhtml_legend=1 00:11:43.122 --rc geninfo_all_blocks=1 00:11:43.122 --rc geninfo_unexecuted_blocks=1 00:11:43.122 00:11:43.122 ' 00:11:43.122 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.122 --rc genhtml_branch_coverage=1 00:11:43.122 --rc genhtml_function_coverage=1 00:11:43.122 --rc genhtml_legend=1 00:11:43.122 --rc geninfo_all_blocks=1 00:11:43.122 --rc geninfo_unexecuted_blocks=1 00:11:43.122 00:11:43.122 ' 00:11:43.123 12:42:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:43.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.123 12:42:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70459 00:11:43.123 12:42:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70459 00:11:43.123 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70459 ']' 00:11:43.123 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.123 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.123 12:42:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:43.123 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.123 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.123 12:42:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:43.123 [2024-12-05 12:42:42.819907] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:11:43.123 [2024-12-05 12:42:42.820043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70459 ] 00:11:43.451 [2024-12-05 12:42:42.982218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.451 [2024-12-05 12:42:43.007746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.018 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.018 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:11:44.018 12:42:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:44.018 12:42:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:44.018 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.018 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:44.018 { 00:11:44.018 "filename": "/tmp/spdk_mem_dump.txt" 00:11:44.018 } 00:11:44.018 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.018 12:42:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:44.018 DPDK memory size 818.000000 MiB in 1 heap(s) 00:11:44.018 1 heaps totaling size 818.000000 MiB 00:11:44.018 size: 818.000000 MiB heap id: 0 00:11:44.018 end heaps---------- 00:11:44.018 9 mempools totaling size 603.782043 MiB 00:11:44.018 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:44.018 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:44.018 size: 100.555481 MiB name: bdev_io_70459 00:11:44.018 size: 50.003479 MiB name: msgpool_70459 00:11:44.018 size: 36.509338 MiB name: fsdev_io_70459 00:11:44.018 size: 21.763794 MiB name: PDU_Pool 00:11:44.018 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:44.018 size: 4.133484 MiB name: evtpool_70459 00:11:44.018 size: 0.026123 MiB name: Session_Pool 00:11:44.018 end mempools------- 00:11:44.018 6 memzones totaling size 4.142822 MiB 00:11:44.018 size: 1.000366 MiB name: RG_ring_0_70459 00:11:44.018 size: 1.000366 MiB name: RG_ring_1_70459 00:11:44.018 size: 1.000366 MiB name: RG_ring_4_70459 00:11:44.018 size: 1.000366 MiB name: RG_ring_5_70459 00:11:44.018 size: 0.125366 MiB name: RG_ring_2_70459 00:11:44.018 size: 0.015991 MiB name: RG_ring_3_70459 00:11:44.018 end memzones------- 00:11:44.018 12:42:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:44.018 heap id: 0 total size: 818.000000 MiB number of busy elements: 320 number of free elements: 15 00:11:44.018 list of free elements. 
size: 10.801941 MiB 00:11:44.018 element at address: 0x200019200000 with size: 0.999878 MiB 00:11:44.019 element at address: 0x200019400000 with size: 0.999878 MiB 00:11:44.019 element at address: 0x200032000000 with size: 0.994446 MiB 00:11:44.019 element at address: 0x200000400000 with size: 0.993958 MiB 00:11:44.019 element at address: 0x200006400000 with size: 0.959839 MiB 00:11:44.019 element at address: 0x200012c00000 with size: 0.944275 MiB 00:11:44.019 element at address: 0x200019600000 with size: 0.936584 MiB 00:11:44.019 element at address: 0x200000200000 with size: 0.717346 MiB 00:11:44.019 element at address: 0x20001ae00000 with size: 0.567139 MiB 00:11:44.019 element at address: 0x20000a600000 with size: 0.488892 MiB 00:11:44.019 element at address: 0x200000c00000 with size: 0.486267 MiB 00:11:44.019 element at address: 0x200019800000 with size: 0.485657 MiB 00:11:44.019 element at address: 0x200003e00000 with size: 0.480286 MiB 00:11:44.019 element at address: 0x200028200000 with size: 0.395752 MiB 00:11:44.019 element at address: 0x200000800000 with size: 0.351746 MiB 00:11:44.019 list of standard malloc elements. size: 199.269165 MiB 00:11:44.019 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:11:44.019 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:11:44.019 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:44.019 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:11:44.019 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:11:44.019 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:44.019 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:11:44.019 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:44.019 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:11:44.019 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:11:44.019 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000085e580 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087e840 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087e900 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f080 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f140 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f200 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f380 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f440 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f500 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000087f680 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:11:44.019 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000cff000 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:11:44.019 element at address: 0x200003efb980 with size: 0.000183 MiB 00:11:44.019 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:11:44.019 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92a40 with size: 0.000183 MiB 
00:11:44.020 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:11:44.020 element at 
address: 0x20001ae94fc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:11:44.020 element at address: 0x200028265500 with size: 0.000183 MiB 00:11:44.020 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c480 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c540 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c600 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c780 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c840 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c900 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d080 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d140 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d200 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d380 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d440 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d500 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d680 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d740 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d800 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826d980 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826da40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826db00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826de00 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826df80 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e040 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e100 
with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e280 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e340 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e400 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e580 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e640 with size: 0.000183 MiB 00:11:44.020 element at address: 0x20002826e700 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826e880 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826e940 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f000 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f180 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f240 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f300 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f480 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f540 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f600 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f780 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f840 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f900 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:11:44.021 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:11:44.021 list of memzone associated elements. 
size: 607.928894 MiB 00:11:44.021 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:11:44.021 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:44.021 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:11:44.021 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:44.021 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:11:44.021 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_70459_0 00:11:44.021 element at address: 0x200000dff380 with size: 48.003052 MiB 00:11:44.021 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70459_0 00:11:44.021 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:11:44.021 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70459_0 00:11:44.021 element at address: 0x2000199be940 with size: 20.255554 MiB 00:11:44.021 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:44.021 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:11:44.021 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:44.021 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:11:44.021 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70459_0 00:11:44.021 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:11:44.021 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70459 00:11:44.021 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:44.021 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70459 00:11:44.021 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:11:44.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:44.021 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:11:44.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:44.021 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:11:44.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:44.021 element at address: 0x200003efba40 with size: 1.008118 MiB 00:11:44.021 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:44.021 element at address: 0x200000cff180 with size: 1.000488 MiB 00:11:44.021 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70459 00:11:44.021 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:11:44.021 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70459 00:11:44.021 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:11:44.021 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70459 00:11:44.021 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:11:44.021 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70459 00:11:44.021 element at address: 0x20000087f740 with size: 0.500488 MiB 00:11:44.021 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70459 00:11:44.021 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:11:44.021 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70459 00:11:44.021 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:11:44.021 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:44.021 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:11:44.021 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:44.021 element at address: 0x20001987c540 with size: 0.250488 MiB 00:11:44.021 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:11:44.021 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:11:44.021 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70459 00:11:44.021 element at address: 0x20000085e640 with size: 0.125488 MiB 00:11:44.021 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70459 00:11:44.021 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:11:44.021 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:44.021 element at address: 0x200028265680 with size: 0.023743 MiB 00:11:44.021 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:44.021 element at address: 0x20000085a380 with size: 0.016113 MiB 00:11:44.021 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70459 00:11:44.021 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:11:44.021 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:44.021 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:11:44.021 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70459 00:11:44.021 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:11:44.021 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70459 00:11:44.021 element at address: 0x20000085a180 with size: 0.000305 MiB 00:11:44.021 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70459 00:11:44.021 element at address: 0x20002826c280 with size: 0.000305 MiB 00:11:44.021 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:44.021 12:42:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:44.021 12:42:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70459 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70459 ']' 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70459 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70459 00:11:44.021 killing process with pid 70459 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70459' 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70459 00:11:44.021 12:42:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70459 00:11:44.585 00:11:44.586 real 0m1.541s 00:11:44.586 user 0m1.560s 00:11:44.586 sys 0m0.423s 00:11:44.586 12:42:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.586 12:42:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:44.586 ************************************ 00:11:44.586 END TEST dpdk_mem_utility 00:11:44.586 ************************************ 00:11:44.586 12:42:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:44.586 12:42:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.586 12:42:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.586 12:42:44 -- common/autotest_common.sh@10 -- # set +x 
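The dpdk_mem_utility test that just ended is, at its core, a short RPC round-trip. A minimal manual replay is sketched below: start spdk_tgt, poll until the RPC socket answers (our crude stand-in for autotest's waitforlisten helper), ask the target to dump its DPDK allocation records, then post-process the dump with scripts/dpdk_mem_info.py; run with no arguments it prints the heap/mempool/memzone summary seen above, and with -m 0 the element-level view of heap 0. Paths assume the vagrant layout from this log, and the dump lands in /tmp/spdk_mem_dump.txt as the RPC reported in the trace.

  #!/usr/bin/env bash
  # Sketch: replay of test_dpdk_mem_info.sh's core steps (repo path assumed).
  SPDK=/home/vagrant/spdk_repo/spdk

  "$SPDK/build/bin/spdk_tgt" &          # target listens on /var/tmp/spdk.sock
  pid=$!
  trap 'kill $pid' EXIT

  for _ in $(seq 100); do               # wait until the socket accepts RPCs
    "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null && break
    sleep 0.1
  done

  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  "$SPDK/scripts/dpdk_mem_info.py"                # heap/mempool/memzone totals
  "$SPDK/scripts/dpdk_mem_info.py" -m 0           # per-element view of heap 0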
00:11:44.586 ************************************ 00:11:44.586 START TEST event 00:11:44.586 ************************************ 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:44.586 * Looking for test storage... 00:11:44.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.586 12:42:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.586 12:42:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.586 12:42:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.586 12:42:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.586 12:42:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.586 12:42:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.586 12:42:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.586 12:42:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.586 12:42:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.586 12:42:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.586 12:42:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.586 12:42:44 event -- scripts/common.sh@344 -- # case "$op" in 00:11:44.586 12:42:44 event -- scripts/common.sh@345 -- # : 1 00:11:44.586 12:42:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.586 12:42:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.586 12:42:44 event -- scripts/common.sh@365 -- # decimal 1 00:11:44.586 12:42:44 event -- scripts/common.sh@353 -- # local d=1 00:11:44.586 12:42:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.586 12:42:44 event -- scripts/common.sh@355 -- # echo 1 00:11:44.586 12:42:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.586 12:42:44 event -- scripts/common.sh@366 -- # decimal 2 00:11:44.586 12:42:44 event -- scripts/common.sh@353 -- # local d=2 00:11:44.586 12:42:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.586 12:42:44 event -- scripts/common.sh@355 -- # echo 2 00:11:44.586 12:42:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.586 12:42:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.586 12:42:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.586 12:42:44 event -- scripts/common.sh@368 -- # return 0 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.586 --rc genhtml_branch_coverage=1 00:11:44.586 --rc genhtml_function_coverage=1 00:11:44.586 --rc genhtml_legend=1 00:11:44.586 --rc geninfo_all_blocks=1 00:11:44.586 --rc geninfo_unexecuted_blocks=1 00:11:44.586 00:11:44.586 ' 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.586 --rc genhtml_branch_coverage=1 00:11:44.586 --rc genhtml_function_coverage=1 00:11:44.586 --rc genhtml_legend=1 00:11:44.586 --rc 
geninfo_all_blocks=1 00:11:44.586 --rc geninfo_unexecuted_blocks=1 00:11:44.586 00:11:44.586 ' 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.586 --rc genhtml_branch_coverage=1 00:11:44.586 --rc genhtml_function_coverage=1 00:11:44.586 --rc genhtml_legend=1 00:11:44.586 --rc geninfo_all_blocks=1 00:11:44.586 --rc geninfo_unexecuted_blocks=1 00:11:44.586 00:11:44.586 ' 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.586 --rc genhtml_branch_coverage=1 00:11:44.586 --rc genhtml_function_coverage=1 00:11:44.586 --rc genhtml_legend=1 00:11:44.586 --rc geninfo_all_blocks=1 00:11:44.586 --rc geninfo_unexecuted_blocks=1 00:11:44.586 00:11:44.586 ' 00:11:44.586 12:42:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:44.586 12:42:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:44.586 12:42:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:44.586 12:42:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.586 12:42:44 event -- common/autotest_common.sh@10 -- # set +x 00:11:44.586 ************************************ 00:11:44.586 START TEST event_perf 00:11:44.586 ************************************ 00:11:44.586 12:42:44 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:44.586 Running I/O for 1 seconds...[2024-12-05 12:42:44.351986] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:44.586 [2024-12-05 12:42:44.352113] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70534 ] 00:11:44.844 [2024-12-05 12:42:44.510952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.844 [2024-12-05 12:42:44.542250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.844 [2024-12-05 12:42:44.542666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.844 Running I/O for 1 seconds...[2024-12-05 12:42:44.544128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.844 [2024-12-05 12:42:44.544190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.783 00:11:45.783 lcore 0: 147029 00:11:45.783 lcore 1: 147031 00:11:45.783 lcore 2: 147033 00:11:45.783 lcore 3: 147029 00:11:45.783 done. 
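event_perf drives every reactor in the core mask with back-to-back events for the requested time and prints what appear to be per-lcore event counts at exit; the run above (-m 0xF -t 1) shows four reactors each retiring roughly 147 thousand events in the second. The first invocation below is verbatim from the trace; the second is a hypothetical variation (narrower mask, longer window) of the kind used to get steadier per-core numbers, not something this log runs.

  EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
  "$EVENT_PERF" -m 0xF -t 1   # four reactors for one second, as logged above
  "$EVENT_PERF" -m 0x1 -t 5   # single reactor, longer run (hypothetical variant)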
00:11:45.783 00:11:45.783 real 0m1.274s 00:11:45.783 user 0m4.055s 00:11:45.783 sys 0m0.093s 00:11:45.783 12:42:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.783 12:42:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:45.783 ************************************ 00:11:45.783 END TEST event_perf 00:11:45.783 ************************************ 00:11:46.041 12:42:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:46.041 12:42:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.041 12:42:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.041 12:42:45 event -- common/autotest_common.sh@10 -- # set +x 00:11:46.041 ************************************ 00:11:46.041 START TEST event_reactor 00:11:46.041 ************************************ 00:11:46.041 12:42:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:46.041 [2024-12-05 12:42:45.677207] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:46.041 [2024-12-05 12:42:45.677321] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70573 ] 00:11:46.041 [2024-12-05 12:42:45.834950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.041 [2024-12-05 12:42:45.860046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.411 test_start 00:11:47.411 oneshot 00:11:47.411 tick 100 00:11:47.411 tick 100 00:11:47.411 tick 250 00:11:47.411 tick 100 00:11:47.411 tick 100 00:11:47.411 tick 250 00:11:47.411 tick 100 00:11:47.411 tick 500 00:11:47.411 tick 100 00:11:47.411 tick 100 00:11:47.411 tick 250 00:11:47.411 tick 100 00:11:47.411 tick 100 00:11:47.411 test_end 00:11:47.411 00:11:47.411 real 0m1.259s 00:11:47.411 user 0m1.092s 00:11:47.411 sys 0m0.060s 00:11:47.411 12:42:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.411 12:42:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:47.411 ************************************ 00:11:47.411 END TEST event_reactor 00:11:47.411 ************************************ 00:11:47.412 12:42:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:47.412 12:42:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:47.412 12:42:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.412 12:42:46 event -- common/autotest_common.sh@10 -- # set +x 00:11:47.412 ************************************ 00:11:47.412 START TEST event_reactor_perf 00:11:47.412 ************************************ 00:11:47.412 12:42:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:47.412 [2024-12-05 12:42:46.990983] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
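Two sibling tests sit on either side of this point. event_reactor (END TEST just above) ran a single reactor for one second, firing a one-shot event at startup plus repeating timed events, and logged a tick line per expiration; the interleaved 100/250/500 values read like three periodic timers with those relative periods, though the units are not printed in the log. event_reactor_perf, whose startup banner closes the block above, instead measures raw event throughput on one reactor and reports an events-per-second figure further down. Both binaries take the same -t run-time knob, per the traced command lines:

  TESTS=/home/vagrant/spdk_repo/spdk/test/event
  "$TESTS/reactor/reactor" -t 1             # one-shot + periodic ticks, as above
  "$TESTS/reactor_perf/reactor_perf" -t 1   # prints "Performance: N events per second"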
00:11:47.412 [2024-12-05 12:42:46.991153] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70610 ] 00:11:47.412 [2024-12-05 12:42:47.162398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.412 [2024-12-05 12:42:47.187208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.784 test_start 00:11:48.784 test_end 00:11:48.784 Performance: 315334 events per second 00:11:48.784 00:11:48.784 real 0m1.277s 00:11:48.784 user 0m1.098s 00:11:48.784 sys 0m0.071s 00:11:48.784 12:42:48 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.784 12:42:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:48.784 ************************************ 00:11:48.784 END TEST event_reactor_perf 00:11:48.784 ************************************ 00:11:48.784 12:42:48 event -- event/event.sh@49 -- # uname -s 00:11:48.784 12:42:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:48.784 12:42:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:48.784 12:42:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:48.784 12:42:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.784 12:42:48 event -- common/autotest_common.sh@10 -- # set +x 00:11:48.784 ************************************ 00:11:48.784 START TEST event_scheduler 00:11:48.784 ************************************ 00:11:48.784 12:42:48 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:48.784 * Looking for test storage... 
00:11:48.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:48.784 12:42:48 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.784 12:42:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.785 12:42:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.785 --rc genhtml_branch_coverage=1 00:11:48.785 --rc genhtml_function_coverage=1 00:11:48.785 --rc genhtml_legend=1 00:11:48.785 --rc geninfo_all_blocks=1 00:11:48.785 --rc geninfo_unexecuted_blocks=1 00:11:48.785 00:11:48.785 ' 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.785 --rc genhtml_branch_coverage=1 00:11:48.785 --rc genhtml_function_coverage=1 00:11:48.785 --rc genhtml_legend=1 00:11:48.785 --rc geninfo_all_blocks=1 00:11:48.785 --rc geninfo_unexecuted_blocks=1 00:11:48.785 00:11:48.785 ' 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.785 --rc genhtml_branch_coverage=1 00:11:48.785 --rc genhtml_function_coverage=1 00:11:48.785 --rc genhtml_legend=1 00:11:48.785 --rc geninfo_all_blocks=1 00:11:48.785 --rc geninfo_unexecuted_blocks=1 00:11:48.785 00:11:48.785 ' 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.785 --rc genhtml_branch_coverage=1 00:11:48.785 --rc genhtml_function_coverage=1 00:11:48.785 --rc genhtml_legend=1 00:11:48.785 --rc geninfo_all_blocks=1 00:11:48.785 --rc geninfo_unexecuted_blocks=1 00:11:48.785 00:11:48.785 ' 00:11:48.785 12:42:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:48.785 12:42:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70675 00:11:48.785 12:42:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:48.785 12:42:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70675 00:11:48.785 12:42:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:48.785 12:42:48 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70675 ']' 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.785 12:42:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:48.785 [2024-12-05 12:42:48.518560] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:11:48.785 [2024-12-05 12:42:48.518690] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70675 ] 00:11:49.043 [2024-12-05 12:42:48.677927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.043 [2024-12-05 12:42:48.706399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.043 [2024-12-05 12:42:48.706530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.043 [2024-12-05 12:42:48.707364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.043 [2024-12-05 12:42:48.707434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.608 12:42:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.608 12:42:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:11:49.608 12:42:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:49.608 12:42:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.608 12:42:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:49.608 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:49.608 POWER: Cannot set governor of lcore 0 to userspace 00:11:49.608 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:49.608 POWER: Cannot set governor of lcore 0 to performance 00:11:49.608 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:49.608 POWER: Cannot set governor of lcore 0 to userspace 00:11:49.608 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:49.608 POWER: Cannot set governor of lcore 0 to userspace 00:11:49.608 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:49.608 POWER: Unable to set Power Management Environment for lcore 0 00:11:49.608 [2024-12-05 12:42:49.372785] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:11:49.608 [2024-12-05 12:42:49.372820] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:11:49.608 [2024-12-05 12:42:49.372831] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:11:49.608 [2024-12-05 12:42:49.372860] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:49.608 [2024-12-05 
12:42:49.372880] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:49.608 [2024-12-05 12:42:49.372890] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:49.608 12:42:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.608 12:42:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:49.608 12:42:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.608 12:42:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:49.608 [2024-12-05 12:42:49.446295] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:11:49.609 12:42:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.609 12:42:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:49.609 12:42:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:49.609 12:42:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.609 12:42:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:49.866 ************************************ 00:11:49.866 START TEST scheduler_create_thread 00:11:49.866 ************************************ 00:11:49.866 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:11:49.866 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:49.866 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 2 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 3 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 4 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 5 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 6 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 7 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 8 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 9 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 10 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.867 12:42:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:50.436 12:42:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.436 12:42:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:50.436 12:42:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:50.436 12:42:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.436 12:42:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:51.397 ************************************ 00:11:51.397 END TEST scheduler_create_thread 00:11:51.397 ************************************ 00:11:51.397 12:42:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.397 00:11:51.397 real 0m1.753s 00:11:51.397 user 0m0.016s 00:11:51.397 sys 0m0.003s 00:11:51.397 12:42:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.397 12:42:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:51.656 12:42:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:51.656 12:42:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70675 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70675 ']' 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70675 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70675 00:11:51.656 killing process with pid 70675 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70675' 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70675 00:11:51.656 12:42:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70675 00:11:51.914 [2024-12-05 12:42:51.693271] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:11:52.170 ************************************ 00:11:52.170 END TEST event_scheduler 00:11:52.170 00:11:52.170 real 0m3.572s 00:11:52.170 user 0m6.201s 00:11:52.170 sys 0m0.370s 00:11:52.170 12:42:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.170 12:42:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 ************************************ 00:11:52.170 12:42:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:52.170 12:42:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:52.170 12:42:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.170 12:42:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.170 12:42:51 event -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 ************************************ 00:11:52.170 START TEST app_repeat 00:11:52.170 ************************************ 00:11:52.170 12:42:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:11:52.170 12:42:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.170 12:42:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.170 12:42:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:52.170 12:42:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:52.171 Process app_repeat pid: 70770 00:11:52.171 spdk_app_start Round 0 00:11:52.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70770 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70770' 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70770 /var/tmp/spdk-nbd.sock 00:11:52.171 12:42:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70770 ']' 00:11:52.171 12:42:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:52.171 12:42:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:52.171 12:42:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.171 12:42:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:52.171 12:42:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.171 12:42:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:52.171 [2024-12-05 12:42:51.959490] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
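Annotation: a condensed sketch of the round loop the app_repeat harness is entering here, reconstructed from the xtrace above (the RPC socket, malloc create arguments, and the {0..2} round range come straight from the trace; helper internals are elided). A fourth round (Round 3) is driven by the app itself, after which the harness only waits for the listener again and kills the process.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_server=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"             # block until the app listens on the socket
        $rpc_py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc0
        $rpc_py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc_py -s "$rpc_server" spdk_kill_instance SIGTERM   # restart the app for the next round
        sleep 3
    done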
00:11:52.171 [2024-12-05 12:42:51.959754] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70770 ] 00:11:52.427 [2024-12-05 12:42:52.113569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:52.427 [2024-12-05 12:42:52.140437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.427 [2024-12-05 12:42:52.140511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.360 12:42:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.360 12:42:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:53.360 12:42:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:53.360 Malloc0 00:11:53.360 12:42:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:53.617 Malloc1 00:11:53.617 12:42:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.617 12:42:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:53.617 /dev/nbd0 00:11:53.879 12:42:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:53.879 12:42:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:53.879 12:42:53 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:53.879 1+0 records in 00:11:53.879 1+0 records out 00:11:53.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174599 s, 23.5 MB/s 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:53.879 12:42:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:53.879 12:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.879 12:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.879 12:42:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:53.879 /dev/nbd1 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:54.236 1+0 records in 00:11:54.236 1+0 records out 00:11:54.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189248 s, 21.6 MB/s 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:54.236 12:42:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
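Annotation: in outline, the waitfornbd helper traced above for nbd0 polls /proc/partitions until the device registers, then proves it actually services I/O. A sketch from the trace; the xtrace does not show a sleep between polls, so that value is an assumption, and the test-file path is shortened here for illustration.

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device registered with the kernel?
            sleep 0.1                                          # assumed back-off between polls
        done
        # Read one 4 KiB block through the device with O_DIRECT to prove it is live.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                       # non-empty read => ready
    }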
00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:54.236 { 00:11:54.236 "nbd_device": "/dev/nbd0", 00:11:54.236 "bdev_name": "Malloc0" 00:11:54.236 }, 00:11:54.236 { 00:11:54.236 "nbd_device": "/dev/nbd1", 00:11:54.236 "bdev_name": "Malloc1" 00:11:54.236 } 00:11:54.236 ]' 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:54.236 { 00:11:54.236 "nbd_device": "/dev/nbd0", 00:11:54.236 "bdev_name": "Malloc0" 00:11:54.236 }, 00:11:54.236 { 00:11:54.236 "nbd_device": "/dev/nbd1", 00:11:54.236 "bdev_name": "Malloc1" 00:11:54.236 } 00:11:54.236 ]' 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:54.236 12:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:54.236 /dev/nbd1' 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:54.236 /dev/nbd1' 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:54.236 256+0 records in 00:11:54.236 256+0 records out 00:11:54.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516573 s, 203 MB/s 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:54.236 256+0 records in 00:11:54.236 256+0 records out 00:11:54.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216397 s, 48.5 MB/s 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:54.236 12:42:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:54.495 256+0 records in 00:11:54.495 256+0 records out 00:11:54.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230948 s, 45.4 MB/s 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:54.495 12:42:54 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.495 12:42:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:54.754 12:42:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:54.754 12:42:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:54.755 12:42:54 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.755 12:42:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:55.015 12:42:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:55.015 12:42:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:55.275 12:42:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:55.532 [2024-12-05 12:42:55.149305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:55.532 [2024-12-05 12:42:55.173548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.532 [2024-12-05 12:42:55.173711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.532 [2024-12-05 12:42:55.215766] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:55.532 [2024-12-05 12:42:55.215846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:58.877 12:42:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:58.877 spdk_app_start Round 1 00:11:58.877 12:42:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:58.877 12:42:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70770 /var/tmp/spdk-nbd.sock 00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70770 ']' 00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:58.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
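Annotation: Round 0 is complete and Round 1 is starting. The data pass each round performs is, directly from the trace above (only the scratch-file path is shortened here): write 1 MiB of random data through each NBD device with O_DIRECT, then read it back and compare byte-for-byte.

    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"                             # verify phase: byte-for-byte compare
    done
    rm "$tmp_file"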
00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.877 12:42:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:58.877 12:42:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:58.877 Malloc0 00:11:58.877 12:42:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:58.877 Malloc1 00:11:58.877 12:42:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.877 12:42:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:58.878 12:42:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.878 12:42:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.878 12:42:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.878 12:42:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:58.878 12:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.878 12:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.878 12:42:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:59.136 /dev/nbd0 00:11:59.136 12:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:59.136 12:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:59.136 1+0 records in 00:11:59.136 1+0 records out 
00:11:59.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458015 s, 8.9 MB/s 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:59.136 12:42:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:59.136 12:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.136 12:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:59.136 12:42:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:59.393 /dev/nbd1 00:11:59.393 12:42:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:59.393 12:42:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:59.393 1+0 records in 00:11:59.393 1+0 records out 00:11:59.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491589 s, 8.3 MB/s 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:59.393 12:42:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:59.393 12:42:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.393 12:42:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:59.393 12:42:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:59.393 12:42:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.393 12:42:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:59.650 { 00:11:59.650 "nbd_device": "/dev/nbd0", 00:11:59.650 "bdev_name": "Malloc0" 00:11:59.650 }, 00:11:59.650 { 00:11:59.650 "nbd_device": "/dev/nbd1", 00:11:59.650 "bdev_name": "Malloc1" 00:11:59.650 } 
00:11:59.650 ]' 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:59.650 { 00:11:59.650 "nbd_device": "/dev/nbd0", 00:11:59.650 "bdev_name": "Malloc0" 00:11:59.650 }, 00:11:59.650 { 00:11:59.650 "nbd_device": "/dev/nbd1", 00:11:59.650 "bdev_name": "Malloc1" 00:11:59.650 } 00:11:59.650 ]' 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:59.650 /dev/nbd1' 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:59.650 /dev/nbd1' 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:59.650 12:42:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:59.651 256+0 records in 00:11:59.651 256+0 records out 00:11:59.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655415 s, 160 MB/s 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:59.651 256+0 records in 00:11:59.651 256+0 records out 00:11:59.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019941 s, 52.6 MB/s 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.651 12:42:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:59.908 256+0 records in 00:11:59.908 256+0 records out 00:11:59.908 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237333 s, 44.2 MB/s 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.908 12:42:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.166 12:42:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:00.424 12:43:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:00.424 12:43:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:00.681 12:43:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:00.939 [2024-12-05 12:43:00.598496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:00.939 [2024-12-05 12:43:00.623375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.939 [2024-12-05 12:43:00.623394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.939 [2024-12-05 12:43:00.666266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:00.939 [2024-12-05 12:43:00.666331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:04.293 12:43:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:04.293 12:43:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:04.293 spdk_app_start Round 2 00:12:04.293 12:43:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70770 /var/tmp/spdk-nbd.sock 00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70770 ']' 00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
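Annotation: the count=0 check that gates each teardown, as just seen for Round 1 (a sketch of the nbd_get_count logic visible in the xtrace; socket and script paths are taken from the trace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nbd_disks_json=$($rpc_py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)  # grep -c exits non-zero on 0 matches, hence the `true` in the trace
    [ "$count" -ne 0 ] && return 1                              # every NBD device must be gone before the next round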
00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.293 12:43:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:04.293 12:43:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:04.293 Malloc0 00:12:04.293 12:43:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:04.552 Malloc1 00:12:04.552 12:43:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.552 12:43:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:04.809 /dev/nbd0 00:12:04.809 12:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.809 12:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:04.809 12:43:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:04.810 12:43:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:04.810 1+0 records in 00:12:04.810 1+0 records out 
00:12:04.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253552 s, 16.2 MB/s 00:12:04.810 12:43:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.810 12:43:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:04.810 12:43:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.810 12:43:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:04.810 12:43:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:04.810 12:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.810 12:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.810 12:43:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:04.810 /dev/nbd1 00:12:05.067 12:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:05.067 12:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:05.067 1+0 records in 00:12:05.067 1+0 records out 00:12:05.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226288 s, 18.1 MB/s 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:05.067 12:43:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:05.067 12:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.067 12:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.067 12:43:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:05.068 12:43:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.068 12:43:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.068 12:43:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:05.068 { 00:12:05.068 "nbd_device": "/dev/nbd0", 00:12:05.068 "bdev_name": "Malloc0" 00:12:05.068 }, 00:12:05.068 { 00:12:05.068 "nbd_device": "/dev/nbd1", 00:12:05.068 "bdev_name": "Malloc1" 00:12:05.068 } 
00:12:05.068 ]' 00:12:05.068 12:43:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:05.068 { 00:12:05.068 "nbd_device": "/dev/nbd0", 00:12:05.068 "bdev_name": "Malloc0" 00:12:05.068 }, 00:12:05.068 { 00:12:05.068 "nbd_device": "/dev/nbd1", 00:12:05.068 "bdev_name": "Malloc1" 00:12:05.068 } 00:12:05.068 ]' 00:12:05.068 12:43:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:05.325 /dev/nbd1' 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:05.325 /dev/nbd1' 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:05.325 256+0 records in 00:12:05.325 256+0 records out 00:12:05.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631054 s, 166 MB/s 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:05.325 256+0 records in 00:12:05.325 256+0 records out 00:12:05.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152955 s, 68.6 MB/s 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:05.325 256+0 records in 00:12:05.325 256+0 records out 00:12:05.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183688 s, 57.1 MB/s 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:05.325 12:43:04 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.325 12:43:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:05.326 12:43:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.326 12:43:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.326 12:43:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.326 12:43:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:05.326 12:43:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.326 12:43:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.582 12:43:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:05.889 12:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:05.889 12:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:05.889 12:43:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:05.889 12:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.889 12:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.889 12:43:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:05.889 12:43:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:05.890 12:43:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:06.215 12:43:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:06.215 12:43:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:06.215 12:43:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:06.215 [2024-12-05 12:43:06.058063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.473 [2024-12-05 12:43:06.082782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.473 [2024-12-05 12:43:06.082786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.473 [2024-12-05 12:43:06.125929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:06.473 [2024-12-05 12:43:06.126001] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:09.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:09.752 12:43:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70770 /var/tmp/spdk-nbd.sock 00:12:09.752 12:43:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70770 ']' 00:12:09.752 12:43:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:09.752 12:43:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.752 12:43:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
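The trace above is the data path of nbd_dd_data_verify: dd fills a 1 MiB temp file from /dev/urandom, writes it through each /dev/nbd device with oflag=direct, re-reads it with cmp for a byte-for-byte check, and after nbd_stop_disk polls nbd_get_disks until no /dev/nbd names remain. A minimal sketch of that write/verify/count cycle, assuming rpc.py is on PATH and reusing the file and socket paths shown in the trace:

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data

for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write past the page cache
    cmp -b -n 1M "$tmp" "$dev"                             # byte-for-byte readback check
done
rm "$tmp"

# After nbd_stop_disk, the target should report no exported devices.
count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || echo "NBD devices still attached: $count" >&2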
00:12:09.752 12:43:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.752 12:43:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:09.752 12:43:09 event.app_repeat -- event/event.sh@39 -- # killprocess 70770 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70770 ']' 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70770 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70770 00:12:09.752 killing process with pid 70770 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70770' 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70770 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70770 00:12:09.752 spdk_app_start is called in Round 0. 00:12:09.752 Shutdown signal received, stop current app iteration 00:12:09.752 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 reinitialization... 00:12:09.752 spdk_app_start is called in Round 1. 00:12:09.752 Shutdown signal received, stop current app iteration 00:12:09.752 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 reinitialization... 00:12:09.752 spdk_app_start is called in Round 2. 00:12:09.752 Shutdown signal received, stop current app iteration 00:12:09.752 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 reinitialization... 00:12:09.752 spdk_app_start is called in Round 3. 00:12:09.752 Shutdown signal received, stop current app iteration 00:12:09.752 12:43:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:09.752 12:43:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:09.752 00:12:09.752 real 0m17.409s 00:12:09.752 user 0m38.943s 00:12:09.752 sys 0m2.293s 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.752 ************************************ 00:12:09.752 END TEST app_repeat 00:12:09.752 ************************************ 00:12:09.752 12:43:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:09.752 12:43:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:09.753 12:43:09 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:09.753 12:43:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.753 12:43:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.753 12:43:09 event -- common/autotest_common.sh@10 -- # set +x 00:12:09.753 ************************************ 00:12:09.753 START TEST cpu_locks 00:12:09.753 ************************************ 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:09.753 * Looking for test storage... 
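The killprocess calls traced above follow a guarded teardown pattern: probe the PID with kill -0, read its command name with ps, refuse to signal anything whose name is sudo, then kill and reap with wait. A rough reconstruction of that flow (the helper lives in autotest_common.sh; this body is inferred from the xtrace, not copied from the script):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # bail out if the PID is already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1      # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                         # reap; a killed process exits non-zero
}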
00:12:09.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.753 12:43:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:09.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.753 --rc genhtml_branch_coverage=1 00:12:09.753 --rc genhtml_function_coverage=1 00:12:09.753 --rc genhtml_legend=1 00:12:09.753 --rc geninfo_all_blocks=1 00:12:09.753 --rc geninfo_unexecuted_blocks=1 00:12:09.753 00:12:09.753 ' 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:09.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.753 --rc genhtml_branch_coverage=1 00:12:09.753 --rc genhtml_function_coverage=1 
00:12:09.753 --rc genhtml_legend=1 00:12:09.753 --rc geninfo_all_blocks=1 00:12:09.753 --rc geninfo_unexecuted_blocks=1 00:12:09.753 00:12:09.753 ' 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:09.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.753 --rc genhtml_branch_coverage=1 00:12:09.753 --rc genhtml_function_coverage=1 00:12:09.753 --rc genhtml_legend=1 00:12:09.753 --rc geninfo_all_blocks=1 00:12:09.753 --rc geninfo_unexecuted_blocks=1 00:12:09.753 00:12:09.753 ' 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:09.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.753 --rc genhtml_branch_coverage=1 00:12:09.753 --rc genhtml_function_coverage=1 00:12:09.753 --rc genhtml_legend=1 00:12:09.753 --rc geninfo_all_blocks=1 00:12:09.753 --rc geninfo_unexecuted_blocks=1 00:12:09.753 00:12:09.753 ' 00:12:09.753 12:43:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:09.753 12:43:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:09.753 12:43:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:09.753 12:43:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.753 12:43:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:09.753 ************************************ 00:12:09.753 START TEST default_locks 00:12:09.753 ************************************ 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71195 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71195 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71195 ']' 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:09.753 12:43:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:10.064 [2024-12-05 12:43:09.611777] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:12:10.064 [2024-12-05 12:43:09.611924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71195 ] 00:12:10.064 [2024-12-05 12:43:09.771575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.064 [2024-12-05 12:43:09.796783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.634 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.634 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:12:10.634 12:43:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71195 00:12:10.634 12:43:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71195 00:12:10.634 12:43:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71195 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71195 ']' 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71195 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71195 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.895 killing process with pid 71195 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71195' 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71195 00:12:10.895 12:43:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71195 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71195 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71195 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71195 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71195 ']' 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
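The locks_exist check above (cpu_locks.sh line 22 in the xtrace) confirms the target really holds its core-mask locks: lslocks lists the locks owned by the PID and grep looks for the spdk_cpu_lock file prefix. A standalone version of the same probe, assuming the default lock naming that appears later in this log:

# Return 0 if the given PID holds at least one SPDK CPU-core lock file.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

locks_exist 71195 && echo "pid 71195 holds its core lock"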
00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:11.460 ERROR: process (pid: 71195) is no longer running 00:12:11.460 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71195) - No such process 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:11.460 00:12:11.460 real 0m1.507s 00:12:11.460 user 0m1.495s 00:12:11.460 sys 0m0.477s 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.460 ************************************ 00:12:11.460 END TEST default_locks 00:12:11.460 ************************************ 00:12:11.460 12:43:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:11.460 12:43:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:11.460 12:43:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.460 12:43:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.460 12:43:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:11.460 ************************************ 00:12:11.460 START TEST default_locks_via_rpc 00:12:11.460 ************************************ 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71237 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71237 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71237 ']' 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
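The NOT wrapper exercised above inverts an exit status: after the target is killed, waitforlisten on the stale PID must fail, and the test passes only because it does. A simplified sketch of that inversion, assuming plain exit-code semantics (the real helper in autotest_common.sh also special-cases signal deaths, which is what the es > 128 branch in the trace handles):

# Succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1    # killed by a signal: count as a genuine failure
    (( es != 0 ))                 # non-zero exit is what NOT considers success
}

NOT waitforlisten 71195 && echo "listener is gone, as expected"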
00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.461 12:43:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:11.461 [2024-12-05 12:43:11.169906] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:11.461 [2024-12-05 12:43:11.170044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71237 ] 00:12:11.717 [2024-12-05 12:43:11.325666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.717 [2024-12-05 12:43:11.351616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71237 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71237 00:12:12.282 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71237 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71237 ']' 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71237 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:12:12.538 12:43:12 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71237 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.538 killing process with pid 71237 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71237' 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71237 00:12:12.538 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71237 00:12:12.795 00:12:12.795 real 0m1.476s 00:12:12.795 user 0m1.482s 00:12:12.795 sys 0m0.450s 00:12:12.795 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.795 ************************************ 00:12:12.795 END TEST default_locks_via_rpc 00:12:12.795 ************************************ 00:12:12.795 12:43:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.795 12:43:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:12.795 12:43:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.795 12:43:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.795 12:43:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:12.795 ************************************ 00:12:12.795 START TEST non_locking_app_on_locked_coremask 00:12:12.795 ************************************ 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71289 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71289 /var/tmp/spdk.sock 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71289 ']' 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:12.795 12:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:13.052 [2024-12-05 12:43:12.700802] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:12:13.052 [2024-12-05 12:43:12.700940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71289 ] 00:12:13.052 [2024-12-05 12:43:12.857049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.052 [2024-12-05 12:43:12.881739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.999 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.999 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:13.999 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71305 00:12:13.999 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71305 /var/tmp/spdk2.sock 00:12:14.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:14.000 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71305 ']' 00:12:14.000 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:14.000 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.000 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:14.000 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:14.000 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.000 12:43:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:14.000 [2024-12-05 12:43:13.620696] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:14.000 [2024-12-05 12:43:13.620822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71305 ] 00:12:14.000 [2024-12-05 12:43:13.793307] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
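This passage shows why non_locking_app_on_locked_coremask succeeds: the second target is launched with --disable-cpumask-locks and its own RPC socket, so it can share core 0 with the instance that already holds the lock, which is what the 'CPU core locks deactivated' notice above confirms. A condensed sketch of the two launches, with binary path, core mask, and flags taken verbatim from the trace:

# First target claims core 0 (mask 0x1) and creates its lock file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

# Second target reuses core 0 but opts out of lock enforcement and
# listens on its own RPC socket so the two instances do not collide.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
    --disable-cpumask-locks -r /var/tmp/spdk2.sock &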
00:12:14.000 [2024-12-05 12:43:13.793385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.000 [2024-12-05 12:43:13.846610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71289 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71289 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71289 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71289 ']' 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71289 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71289 00:12:14.933 killing process with pid 71289 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71289' 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71289 00:12:14.933 12:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71289 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71305 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71305 ']' 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71305 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71305 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71305' 00:12:15.864 killing process with pid 71305 00:12:15.864 12:43:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71305 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71305 00:12:15.864 00:12:15.864 real 0m3.082s 00:12:15.864 user 0m3.304s 00:12:15.864 sys 0m0.850s 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.864 ************************************ 00:12:15.864 END TEST non_locking_app_on_locked_coremask 00:12:15.864 ************************************ 00:12:15.864 12:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:16.122 12:43:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:16.122 12:43:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.122 12:43:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.122 12:43:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:16.122 ************************************ 00:12:16.122 START TEST locking_app_on_unlocked_coremask 00:12:16.122 ************************************ 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71363 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71363 /var/tmp/spdk.sock 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71363 ']' 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:16.122 12:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:16.122 [2024-12-05 12:43:15.835755] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:16.122 [2024-12-05 12:43:15.835890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71363 ] 00:12:16.380 [2024-12-05 12:43:15.994469] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
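locking_app_on_unlocked_coremask, starting here, mirrors the previous test: this time the primary target is the one launched with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice just above), leaving the core 0 lock free for the second instance started later with locks enabled. One way to observe that state directly, assuming the /var/tmp/spdk_cpu_lock_* naming that appears near the end of this log:

# Check whether any live process holds an flock on the core 0 lock file.
if lslocks | grep -q /var/tmp/spdk_cpu_lock_000; then
    echo "core 0 lock is held"
else
    echo "core 0 lock is free for the next instance"
fi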
00:12:16.381 [2024-12-05 12:43:15.994535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.381 [2024-12-05 12:43:16.019210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71379 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71379 /var/tmp/spdk2.sock 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71379 ']' 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.947 12:43:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:16.948 [2024-12-05 12:43:16.752768] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:12:16.948 [2024-12-05 12:43:16.752913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71379 ] 00:12:17.207 [2024-12-05 12:43:16.930761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.207 [2024-12-05 12:43:16.987187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.772 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.772 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:17.772 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71379 00:12:17.772 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71379 00:12:17.772 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71363 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71363 ']' 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71363 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71363 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.335 killing process with pid 71363 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71363' 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71363 00:12:18.335 12:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71363 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71379 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71379 ']' 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71379 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71379 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71379' 00:12:18.898 killing process with pid 71379 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71379 00:12:18.898 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71379 00:12:19.155 00:12:19.155 real 0m3.205s 00:12:19.155 user 0m3.447s 00:12:19.155 sys 0m0.878s 00:12:19.155 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.155 12:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:19.155 ************************************ 00:12:19.155 END TEST locking_app_on_unlocked_coremask 00:12:19.155 ************************************ 00:12:19.155 12:43:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:19.155 12:43:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.155 12:43:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.155 12:43:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:19.413 ************************************ 00:12:19.413 START TEST locking_app_on_locked_coremask 00:12:19.413 ************************************ 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71437 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71437 /var/tmp/spdk.sock 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71437 ']' 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.413 12:43:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:19.413 [2024-12-05 12:43:19.084533] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:12:19.413 [2024-12-05 12:43:19.084654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71437 ] 00:12:19.413 [2024-12-05 12:43:19.243366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.670 [2024-12-05 12:43:19.268566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71453 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71453 /var/tmp/spdk2.sock 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71453 /var/tmp/spdk2.sock 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71453 /var/tmp/spdk2.sock 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71453 ']' 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.603 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:20.603 [2024-12-05 12:43:20.136335] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:12:20.603 [2024-12-05 12:43:20.136474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71453 ] 00:12:20.603 [2024-12-05 12:43:20.312285] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71437 has claimed it. 00:12:20.603 [2024-12-05 12:43:20.312375] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:21.167 ERROR: process (pid: 71453) is no longer running 00:12:21.167 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71453) - No such process 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71437 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71437 00:12:21.167 12:43:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71437 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71437 ']' 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71437 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71437 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.424 killing process with pid 71437 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71437' 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71437 00:12:21.424 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71437 00:12:21.682 00:12:21.682 real 0m2.380s 00:12:21.682 user 0m2.702s 00:12:21.682 sys 0m0.626s 00:12:21.682 12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.682 ************************************ 00:12:21.682 
12:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:21.682 END TEST locking_app_on_locked_coremask 00:12:21.682 ************************************ 00:12:21.682 12:43:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:21.682 12:43:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.682 12:43:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.682 12:43:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:21.682 ************************************ 00:12:21.682 START TEST locking_overlapped_coremask 00:12:21.682 ************************************ 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71495 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71495 /var/tmp/spdk.sock 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71495 ']' 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:21.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.682 12:43:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:21.682 [2024-12-05 12:43:21.513966] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:12:21.682 [2024-12-05 12:43:21.514099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71495 ] 00:12:21.939 [2024-12-05 12:43:21.672032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:21.939 [2024-12-05 12:43:21.706435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.939 [2024-12-05 12:43:21.706640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.939 [2024-12-05 12:43:21.706748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71513 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71513 /var/tmp/spdk2.sock 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71513 /var/tmp/spdk2.sock 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71513 /var/tmp/spdk2.sock 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71513 ']' 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:22.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.874 12:43:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:22.874 [2024-12-05 12:43:22.439920] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:12:22.874 [2024-12-05 12:43:22.440051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71513 ] 00:12:22.874 [2024-12-05 12:43:22.618056] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71495 has claimed it. 00:12:22.874 [2024-12-05 12:43:22.618140] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:23.440 ERROR: process (pid: 71513) is no longer running 00:12:23.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71513) - No such process 00:12:23.440 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.440 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:12:23.440 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:12:23.440 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:23.440 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71495 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71495 ']' 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71495 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71495 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.441 killing process with pid 71495 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71495' 00:12:23.441 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71495 00:12:23.441 12:43:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71495 00:12:23.699 00:12:23.699 real 0m2.000s 00:12:23.699 user 0m5.453s 00:12:23.699 sys 0m0.477s 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.699 ************************************ 00:12:23.699 END TEST locking_overlapped_coremask 00:12:23.699 ************************************ 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:23.699 12:43:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:23.699 12:43:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.699 12:43:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.699 12:43:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:23.699 ************************************ 00:12:23.699 START TEST locking_overlapped_coremask_via_rpc 00:12:23.699 ************************************ 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71555 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71555 /var/tmp/spdk.sock 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71555 ']' 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.699 12:43:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:23.956 [2024-12-05 12:43:23.563877] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:23.956 [2024-12-05 12:43:23.564010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71555 ] 00:12:23.956 [2024-12-05 12:43:23.720841] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
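[Annotation] This second cpu_locks test launches both targets with --disable-cpumask-locks, so neither takes the per-core lock files at startup ("CPU core locks deactivated" above); the locks are only claimed later via RPC. A minimal sketch of the lock-file scheme those messages refer to follows. This is an illustrative re-implementation, not SPDK's actual app.c code; the file naming matches the spdk_cpu_lock_000..002 paths that check_remaining_locks verifies in the traces.

    # Claim one core by holding an exclusive flock on /var/tmp/spdk_cpu_lock_NNN.
    claim_core() {
        local core=$1 fd lockfile
        lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
        exec {fd}>"$lockfile"          # create/open the lock file on a fresh fd
        if ! flock -n "$fd"; then      # non-blocking exclusive lock
            echo "Cannot create lock on core $core, another process has claimed it" >&2
            return 1
        fi
        # The lock is held until the owning process exits and the fd closes.
    }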
00:12:23.956 [2024-12-05 12:43:23.720923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.956 [2024-12-05 12:43:23.748257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.956 [2024-12-05 12:43:23.748436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.956 [2024-12-05 12:43:23.748562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.891 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.891 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.891 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:24.891 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71573 00:12:24.891 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71573 /var/tmp/spdk2.sock 00:12:24.891 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71573 ']' 00:12:24.891 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:24.892 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:24.892 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:24.892 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.892 12:43:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.892 [2024-12-05 12:43:24.565897] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:24.892 [2024-12-05 12:43:24.566395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71573 ] 00:12:25.152 [2024-12-05 12:43:24.749528] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:25.152 [2024-12-05 12:43:24.749586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.152 [2024-12-05 12:43:24.795842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.152 [2024-12-05 12:43:24.799078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.152 [2024-12-05 12:43:24.799155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.720 [2024-12-05 12:43:25.506032] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71555 has claimed it. 
00:12:25.720 request: 00:12:25.720 { 00:12:25.720 "method": "framework_enable_cpumask_locks", 00:12:25.720 "req_id": 1 00:12:25.720 } 00:12:25.720 Got JSON-RPC error response 00:12:25.720 response: 00:12:25.720 { 00:12:25.720 "code": -32603, 00:12:25.720 "message": "Failed to claim CPU core: 2" 00:12:25.720 } 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71555 /var/tmp/spdk.sock 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71555 ']' 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.720 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71573 /var/tmp/spdk2.sock 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71573 ']' 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
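[Annotation] The -32603 response above is the expected outcome: the first target (pid 71555) already holds the core 2 lock, so enabling lock files through the second target's socket fails. Both commands appear in the trace (rpc_cmd wraps rpc.py), and the error body matches the response dump:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_enable_cpumask_locks                          # first target: succeeds
    $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target:
    # -> {"code": -32603, "message": "Failed to claim CPU core: 2"}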
00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.978 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.235 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.235 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:26.235 ************************************ 00:12:26.235 END TEST locking_overlapped_coremask_via_rpc 00:12:26.235 ************************************ 00:12:26.235 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:26.235 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:26.236 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:26.236 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:26.236 00:12:26.236 real 0m2.465s 00:12:26.236 user 0m1.232s 00:12:26.236 sys 0m0.161s 00:12:26.236 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.236 12:43:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.236 12:43:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:26.236 12:43:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71555 ]] 00:12:26.236 12:43:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71555 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71555 ']' 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71555 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71555 00:12:26.236 killing process with pid 71555 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71555' 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71555 00:12:26.236 12:43:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71555 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71573 ]] 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71573 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71573 ']' 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71573 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.802 
12:43:26 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71573 00:12:26.802 killing process with pid 71573 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71573' 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71573 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71573 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:26.802 Process with pid 71555 is not found 00:12:26.802 Process with pid 71573 is not found 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71555 ]] 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71555 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71555 ']' 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71555 00:12:26.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71555) - No such process 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71555 is not found' 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71573 ]] 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71573 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71573 ']' 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71573 00:12:26.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71573) - No such process 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71573 is not found' 00:12:26.802 12:43:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:26.802 ************************************ 00:12:26.802 END TEST cpu_locks 00:12:26.802 ************************************ 00:12:26.802 00:12:26.802 real 0m17.267s 00:12:26.802 user 0m30.470s 00:12:26.802 sys 0m4.748s 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.802 12:43:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 ************************************ 00:12:27.060 END TEST event 00:12:27.060 ************************************ 00:12:27.060 00:12:27.060 real 0m42.492s 00:12:27.060 user 1m22.011s 00:12:27.060 sys 0m7.860s 00:12:27.060 12:43:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.060 12:43:26 event -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 12:43:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:27.060 12:43:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:27.060 12:43:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.060 12:43:26 -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 ************************************ 00:12:27.060 START TEST thread 00:12:27.060 ************************************ 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:27.060 * Looking for test storage... 
00:12:27.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:27.060 12:43:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.060 12:43:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.060 12:43:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.060 12:43:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.060 12:43:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.060 12:43:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.060 12:43:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.060 12:43:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.060 12:43:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.060 12:43:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.060 12:43:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.060 12:43:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:27.060 12:43:26 thread -- scripts/common.sh@345 -- # : 1 00:12:27.060 12:43:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.060 12:43:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.060 12:43:26 thread -- scripts/common.sh@365 -- # decimal 1 00:12:27.060 12:43:26 thread -- scripts/common.sh@353 -- # local d=1 00:12:27.060 12:43:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.060 12:43:26 thread -- scripts/common.sh@355 -- # echo 1 00:12:27.060 12:43:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.060 12:43:26 thread -- scripts/common.sh@366 -- # decimal 2 00:12:27.060 12:43:26 thread -- scripts/common.sh@353 -- # local d=2 00:12:27.060 12:43:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.060 12:43:26 thread -- scripts/common.sh@355 -- # echo 2 00:12:27.060 12:43:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.060 12:43:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.060 12:43:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.060 12:43:26 thread -- scripts/common.sh@368 -- # return 0 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:27.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.060 --rc genhtml_branch_coverage=1 00:12:27.060 --rc genhtml_function_coverage=1 00:12:27.060 --rc genhtml_legend=1 00:12:27.060 --rc geninfo_all_blocks=1 00:12:27.060 --rc geninfo_unexecuted_blocks=1 00:12:27.060 00:12:27.060 ' 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:27.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.060 --rc genhtml_branch_coverage=1 00:12:27.060 --rc genhtml_function_coverage=1 00:12:27.060 --rc genhtml_legend=1 00:12:27.060 --rc geninfo_all_blocks=1 00:12:27.060 --rc geninfo_unexecuted_blocks=1 00:12:27.060 00:12:27.060 ' 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:27.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:12:27.060 --rc genhtml_branch_coverage=1 00:12:27.060 --rc genhtml_function_coverage=1 00:12:27.060 --rc genhtml_legend=1 00:12:27.060 --rc geninfo_all_blocks=1 00:12:27.060 --rc geninfo_unexecuted_blocks=1 00:12:27.060 00:12:27.060 ' 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:27.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.060 --rc genhtml_branch_coverage=1 00:12:27.060 --rc genhtml_function_coverage=1 00:12:27.060 --rc genhtml_legend=1 00:12:27.060 --rc geninfo_all_blocks=1 00:12:27.060 --rc geninfo_unexecuted_blocks=1 00:12:27.060 00:12:27.060 ' 00:12:27.060 12:43:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.060 12:43:26 thread -- common/autotest_common.sh@10 -- # set +x 00:12:27.060 ************************************ 00:12:27.060 START TEST thread_poller_perf 00:12:27.060 ************************************ 00:12:27.060 12:43:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:27.318 [2024-12-05 12:43:26.926643] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:27.318 [2024-12-05 12:43:26.927293] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71706 ] 00:12:27.318 [2024-12-05 12:43:27.088026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.318 [2024-12-05 12:43:27.113922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.318 Running 1000 pollers for 1 seconds with 1 microseconds period. 
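[Annotation] poller_perf registers 1000 pollers (-b 1000) with a 1 microsecond period (-l 1) and runs for 1 second (-t 1), matching the banner above. The summary table that follows reports total busy TSC cycles and the total run count; the reported per-poller cost is consistent with busy / total_run_count cycles, converted to nanoseconds via tsc_hz. A sketch of that arithmetic, with values taken from the table below:

    busy=2611561326 runs=305000 tsc_hz=2600000000
    cyc=$(( busy / runs ))                    # 8562 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 3293 ns at 2.6 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"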
00:12:28.689 [2024-12-05T12:43:28.541Z] ====================================== 00:12:28.689 [2024-12-05T12:43:28.541Z] busy:2611561326 (cyc) 00:12:28.689 [2024-12-05T12:43:28.541Z] total_run_count: 305000 00:12:28.689 [2024-12-05T12:43:28.541Z] tsc_hz: 2600000000 (cyc) 00:12:28.689 [2024-12-05T12:43:28.541Z] ====================================== 00:12:28.689 [2024-12-05T12:43:28.541Z] poller_cost: 8562 (cyc), 3293 (nsec) 00:12:28.689 00:12:28.689 real 0m1.285s 00:12:28.689 user 0m1.096s 00:12:28.689 sys 0m0.076s 00:12:28.689 12:43:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.689 12:43:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:28.689 ************************************ 00:12:28.689 END TEST thread_poller_perf 00:12:28.689 ************************************ 00:12:28.689 12:43:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:28.689 12:43:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:28.689 12:43:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.689 12:43:28 thread -- common/autotest_common.sh@10 -- # set +x 00:12:28.689 ************************************ 00:12:28.689 START TEST thread_poller_perf 00:12:28.689 ************************************ 00:12:28.689 12:43:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:28.689 [2024-12-05 12:43:28.266229] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:28.689 [2024-12-05 12:43:28.266358] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71742 ] 00:12:28.689 [2024-12-05 12:43:28.422273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.689 Running 1000 pollers for 1 seconds with 0 microseconds period. 
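[Annotation] The second run uses -l 0, i.e. busy pollers with no period, so it isolates the framework's bare dispatch overhead: in the table below the run count jumps from 305000 to 3963000 and the per-call cost drops from 8562 to 656 cycles. The same arithmetic as above, with the 0-period numbers:

    busy=2603415034 runs=3963000 tsc_hz=2600000000
    echo "poller_cost: $(( busy / runs )) (cyc), $(( busy / runs * 1000000000 / tsc_hz )) (nsec)"
    # -> poller_cost: 656 (cyc), 252 (nsec)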
00:12:28.689 [2024-12-05 12:43:28.447417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.078 [2024-12-05T12:43:29.930Z] ====================================== 00:12:30.078 [2024-12-05T12:43:29.930Z] busy:2603415034 (cyc) 00:12:30.078 [2024-12-05T12:43:29.930Z] total_run_count: 3963000 00:12:30.078 [2024-12-05T12:43:29.930Z] tsc_hz: 2600000000 (cyc) 00:12:30.078 [2024-12-05T12:43:29.930Z] ====================================== 00:12:30.078 [2024-12-05T12:43:29.930Z] poller_cost: 656 (cyc), 252 (nsec) 00:12:30.078 00:12:30.078 real 0m1.260s 00:12:30.078 user 0m1.093s 00:12:30.078 sys 0m0.060s 00:12:30.078 12:43:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.078 12:43:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:30.078 ************************************ 00:12:30.078 END TEST thread_poller_perf 00:12:30.078 ************************************ 00:12:30.078 12:43:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:30.078 ************************************ 00:12:30.078 END TEST thread 00:12:30.078 ************************************ 00:12:30.078 00:12:30.078 real 0m2.812s 00:12:30.078 user 0m2.298s 00:12:30.078 sys 0m0.261s 00:12:30.078 12:43:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.078 12:43:29 thread -- common/autotest_common.sh@10 -- # set +x 00:12:30.078 12:43:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:30.078 12:43:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:30.078 12:43:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:30.078 12:43:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.078 12:43:29 -- common/autotest_common.sh@10 -- # set +x 00:12:30.078 ************************************ 00:12:30.078 START TEST app_cmdline 00:12:30.078 ************************************ 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:30.078 * Looking for test storage... 
00:12:30.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
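[Annotation] The xtrace block above (repeated before each test suite) is scripts/common.sh deciding whether the installed lcov is older than version 2, so the right coverage flags get exported. A condensed sketch of the comparison it performs, simplified from the traced helpers (IFS split on '.', '-' or ':', then numeric field-by-field compare), not the exact code:

    lt() {  # lt 1.15 2 -> true, because 1.15 sorts before 2 field by field
        local IFS=.-:
        local -a v1 v2
        local i
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }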
00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.078 12:43:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.078 --rc genhtml_branch_coverage=1 00:12:30.078 --rc genhtml_function_coverage=1 00:12:30.078 --rc genhtml_legend=1 00:12:30.078 --rc geninfo_all_blocks=1 00:12:30.078 --rc geninfo_unexecuted_blocks=1 00:12:30.078 00:12:30.078 ' 00:12:30.078 12:43:29 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.078 --rc genhtml_branch_coverage=1 00:12:30.078 --rc genhtml_function_coverage=1 00:12:30.078 --rc genhtml_legend=1 00:12:30.078 --rc geninfo_all_blocks=1 00:12:30.078 --rc geninfo_unexecuted_blocks=1 00:12:30.079 00:12:30.079 ' 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.079 --rc genhtml_branch_coverage=1 00:12:30.079 --rc genhtml_function_coverage=1 00:12:30.079 --rc genhtml_legend=1 00:12:30.079 --rc geninfo_all_blocks=1 00:12:30.079 --rc geninfo_unexecuted_blocks=1 00:12:30.079 00:12:30.079 ' 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.079 --rc genhtml_branch_coverage=1 00:12:30.079 --rc genhtml_function_coverage=1 00:12:30.079 --rc genhtml_legend=1 00:12:30.079 --rc geninfo_all_blocks=1 00:12:30.079 --rc geninfo_unexecuted_blocks=1 00:12:30.079 00:12:30.079 ' 00:12:30.079 12:43:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:30.079 12:43:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71826 00:12:30.079 12:43:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71826 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71826 ']' 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.079 12:43:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:30.079 12:43:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:30.079 [2024-12-05 12:43:29.797103] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
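[Annotation] cmdline.sh starts this spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, which reduces the target's RPC surface to exactly those two methods. The test then exercises both the allowed and the rejected paths, as the traces below confirm:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version         # allowed: returns the version JSON below
    $rpc rpc_get_methods          # allowed: lists exactly the two permitted methods
    $rpc env_dpdk_get_mem_stats   # rejected: JSON-RPC -32601 "Method not found"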
00:12:30.079 [2024-12-05 12:43:29.797223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71826 ] 00:12:30.336 [2024-12-05 12:43:29.949515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.336 [2024-12-05 12:43:29.975862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.899 12:43:30 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:30.899 12:43:30 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:30.899 12:43:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:31.156 { 00:12:31.156 "version": "SPDK v25.01-pre git sha1 8d3947977", 00:12:31.156 "fields": { 00:12:31.156 "major": 25, 00:12:31.156 "minor": 1, 00:12:31.156 "patch": 0, 00:12:31.156 "suffix": "-pre", 00:12:31.156 "commit": "8d3947977" 00:12:31.156 } 00:12:31.156 } 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:31.156 12:43:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:31.156 12:43:30 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:31.414 request: 00:12:31.414 { 00:12:31.414 "method": "env_dpdk_get_mem_stats", 00:12:31.414 "req_id": 1 00:12:31.414 } 00:12:31.414 Got JSON-RPC error response 00:12:31.414 response: 00:12:31.414 { 00:12:31.414 "code": -32601, 00:12:31.414 "message": "Method not found" 00:12:31.414 } 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:31.414 12:43:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71826 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71826 ']' 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71826 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71826 00:12:31.414 killing process with pid 71826 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71826' 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@973 -- # kill 71826 00:12:31.414 12:43:31 app_cmdline -- common/autotest_common.sh@978 -- # wait 71826 00:12:31.671 00:12:31.671 real 0m1.829s 00:12:31.671 user 0m2.119s 00:12:31.671 sys 0m0.439s 00:12:31.671 12:43:31 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.671 12:43:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:31.671 ************************************ 00:12:31.671 END TEST app_cmdline 00:12:31.671 ************************************ 00:12:31.671 12:43:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:31.671 12:43:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.671 12:43:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.671 12:43:31 -- common/autotest_common.sh@10 -- # set +x 00:12:31.671 ************************************ 00:12:31.671 START TEST version 00:12:31.671 ************************************ 00:12:31.671 12:43:31 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:31.929 * Looking for test storage... 
00:12:31.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:31.929 12:43:31 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:31.929 12:43:31 version -- common/autotest_common.sh@1711 -- # lcov --version 00:12:31.929 12:43:31 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:31.929 12:43:31 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:31.929 12:43:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.929 12:43:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.930 12:43:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.930 12:43:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.930 12:43:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.930 12:43:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.930 12:43:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.930 12:43:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.930 12:43:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.930 12:43:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.930 12:43:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.930 12:43:31 version -- scripts/common.sh@344 -- # case "$op" in 00:12:31.930 12:43:31 version -- scripts/common.sh@345 -- # : 1 00:12:31.930 12:43:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.930 12:43:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:31.930 12:43:31 version -- scripts/common.sh@365 -- # decimal 1 00:12:31.930 12:43:31 version -- scripts/common.sh@353 -- # local d=1 00:12:31.930 12:43:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.930 12:43:31 version -- scripts/common.sh@355 -- # echo 1 00:12:31.930 12:43:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.930 12:43:31 version -- scripts/common.sh@366 -- # decimal 2 00:12:31.930 12:43:31 version -- scripts/common.sh@353 -- # local d=2 00:12:31.930 12:43:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.930 12:43:31 version -- scripts/common.sh@355 -- # echo 2 00:12:31.930 12:43:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.930 12:43:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.930 12:43:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.930 12:43:31 version -- scripts/common.sh@368 -- # return 0 00:12:31.930 12:43:31 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.930 12:43:31 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.930 --rc genhtml_branch_coverage=1 00:12:31.930 --rc genhtml_function_coverage=1 00:12:31.930 --rc genhtml_legend=1 00:12:31.930 --rc geninfo_all_blocks=1 00:12:31.930 --rc geninfo_unexecuted_blocks=1 00:12:31.930 00:12:31.930 ' 00:12:31.930 12:43:31 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.930 --rc genhtml_branch_coverage=1 00:12:31.930 --rc genhtml_function_coverage=1 00:12:31.930 --rc genhtml_legend=1 00:12:31.930 --rc geninfo_all_blocks=1 00:12:31.930 --rc geninfo_unexecuted_blocks=1 00:12:31.930 00:12:31.930 ' 00:12:31.930 12:43:31 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:31.930 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:31.930 --rc genhtml_branch_coverage=1 00:12:31.930 --rc genhtml_function_coverage=1 00:12:31.930 --rc genhtml_legend=1 00:12:31.930 --rc geninfo_all_blocks=1 00:12:31.930 --rc geninfo_unexecuted_blocks=1 00:12:31.930 00:12:31.930 ' 00:12:31.930 12:43:31 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:31.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.930 --rc genhtml_branch_coverage=1 00:12:31.930 --rc genhtml_function_coverage=1 00:12:31.930 --rc genhtml_legend=1 00:12:31.930 --rc geninfo_all_blocks=1 00:12:31.930 --rc geninfo_unexecuted_blocks=1 00:12:31.930 00:12:31.930 ' 00:12:31.930 12:43:31 version -- app/version.sh@17 -- # get_header_version major 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # cut -f2 00:12:31.930 12:43:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # tr -d '"' 00:12:31.930 12:43:31 version -- app/version.sh@17 -- # major=25 00:12:31.930 12:43:31 version -- app/version.sh@18 -- # get_header_version minor 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # cut -f2 00:12:31.930 12:43:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # tr -d '"' 00:12:31.930 12:43:31 version -- app/version.sh@18 -- # minor=1 00:12:31.930 12:43:31 version -- app/version.sh@19 -- # get_header_version patch 00:12:31.930 12:43:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # cut -f2 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # tr -d '"' 00:12:31.930 12:43:31 version -- app/version.sh@19 -- # patch=0 00:12:31.930 12:43:31 version -- app/version.sh@20 -- # get_header_version suffix 00:12:31.930 12:43:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # cut -f2 00:12:31.930 12:43:31 version -- app/version.sh@14 -- # tr -d '"' 00:12:31.930 12:43:31 version -- app/version.sh@20 -- # suffix=-pre 00:12:31.930 12:43:31 version -- app/version.sh@22 -- # version=25.1 00:12:31.930 12:43:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:31.930 12:43:31 version -- app/version.sh@28 -- # version=25.1rc0 00:12:31.930 12:43:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:31.930 12:43:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:31.930 12:43:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:31.930 12:43:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:31.930 00:12:31.930 real 0m0.214s 00:12:31.930 user 0m0.145s 00:12:31.930 sys 0m0.093s 00:12:31.930 ************************************ 00:12:31.930 END TEST version 00:12:31.930 ************************************ 00:12:31.930 12:43:31 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.930 12:43:31 version -- common/autotest_common.sh@10 -- # set +x 00:12:31.930 12:43:31 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:31.930 12:43:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:12:31.930 12:43:31 -- spdk/autotest.sh@194 -- # uname -s 00:12:31.930 12:43:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:12:31.930 12:43:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:31.930 12:43:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:12:31.930 12:43:31 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:12:31.930 12:43:31 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:31.930 12:43:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.930 12:43:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.930 12:43:31 -- common/autotest_common.sh@10 -- # set +x 00:12:31.930 ************************************ 00:12:31.930 START TEST blockdev_nvme 00:12:31.930 ************************************ 00:12:31.930 12:43:31 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:32.189 * Looking for test storage... 00:12:32.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.189 12:43:31 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:32.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.189 --rc genhtml_branch_coverage=1 00:12:32.189 --rc genhtml_function_coverage=1 00:12:32.189 --rc genhtml_legend=1 00:12:32.189 --rc geninfo_all_blocks=1 00:12:32.189 --rc geninfo_unexecuted_blocks=1 00:12:32.189 00:12:32.189 ' 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:32.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.189 --rc genhtml_branch_coverage=1 00:12:32.189 --rc genhtml_function_coverage=1 00:12:32.189 --rc genhtml_legend=1 00:12:32.189 --rc geninfo_all_blocks=1 00:12:32.189 --rc geninfo_unexecuted_blocks=1 00:12:32.189 00:12:32.189 ' 00:12:32.189 12:43:31 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:32.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.189 --rc genhtml_branch_coverage=1 00:12:32.189 --rc genhtml_function_coverage=1 00:12:32.189 --rc genhtml_legend=1 00:12:32.189 --rc geninfo_all_blocks=1 00:12:32.189 --rc geninfo_unexecuted_blocks=1 00:12:32.190 00:12:32.190 ' 00:12:32.190 12:43:31 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.190 --rc genhtml_branch_coverage=1 00:12:32.190 --rc genhtml_function_coverage=1 00:12:32.190 --rc genhtml_legend=1 00:12:32.190 --rc geninfo_all_blocks=1 00:12:32.190 --rc geninfo_unexecuted_blocks=1 00:12:32.190 00:12:32.190 ' 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:32.190 12:43:31 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71987 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71987 00:12:32.190 12:43:31 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71987 ']' 00:12:32.190 12:43:31 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.190 12:43:31 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:32.190 12:43:31 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.190 12:43:31 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.190 12:43:31 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.190 12:43:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.190 [2024-12-05 12:43:31.970749] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
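
At this point the harness has forked build/bin/spdk_tgt and is blocking in waitforlisten until the daemon's UNIX-domain RPC socket answers. A minimal stand-alone sketch of that launch-and-poll pattern outside the autotest wrappers (the retry budget and the rpc_get_methods probe are illustrative assumptions, not the harness's exact logic):

#!/usr/bin/env bash
# Sketch: start spdk_tgt and poll its RPC socket until it is ready.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout path
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" &
tgt_pid=$!
trap 'kill "$tgt_pid" 2>/dev/null' EXIT

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods is a cheap RPC; success means the socket is live.
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        echo "spdk_tgt (pid $tgt_pid) listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done
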
00:12:32.190 [2024-12-05 12:43:31.971084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71987 ] 00:12:32.448 [2024-12-05 12:43:32.128584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.448 [2024-12-05 12:43:32.156670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.012 12:43:32 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.012 12:43:32 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:12:33.012 12:43:32 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:12:33.012 12:43:32 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:12:33.012 12:43:32 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:12:33.012 12:43:32 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:12:33.013 12:43:32 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:33.013 12:43:32 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:12:33.013 12:43:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.013 12:43:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.578 12:43:33 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8e13505b-076a-4c74-8052-9caf99cc0c84"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8e13505b-076a-4c74-8052-9caf99cc0c84",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a25eb196-d9d1-4dca-a67c-d3f915a5d81d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a25eb196-d9d1-4dca-a67c-d3f915a5d81d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "39270da9-d70f-4855-89ed-631dd2190f20"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "39270da9-d70f-4855-89ed-631dd2190f20",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ddd980a1-9ca5-40ac-b170-b9e66d4cca6e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddd980a1-9ca5-40ac-b170-b9e66d4cca6e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c488df4f-a841-45b5-a13c-0afe60da8527"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "c488df4f-a841-45b5-a13c-0afe60da8527",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4771556d-abc5-4348-9baa-09e5065002a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4771556d-abc5-4348-9baa-09e5065002a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:12:33.578 12:43:33 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71987 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71987 ']' 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71987 00:12:33.578 12:43:33 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:12:33.579 12:43:33 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.579 12:43:33 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71987 00:12:33.579 killing process with pid 71987 00:12:33.579 12:43:33 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.579 12:43:33 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.579 12:43:33 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71987' 00:12:33.579 12:43:33 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71987 00:12:33.579 12:43:33 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71987 00:12:33.837 12:43:33 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:33.837 12:43:33 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:33.837 12:43:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:33.837 12:43:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.837 12:43:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:33.837 ************************************ 00:12:33.837 START TEST bdev_hello_world 00:12:33.837 ************************************ 00:12:33.837 12:43:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:34.095 [2024-12-05 12:43:33.755954] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:34.095 [2024-12-05 12:43:33.756604] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72054 ] 00:12:34.095 [2024-12-05 12:43:33.924547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.352 [2024-12-05 12:43:33.952258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.610 [2024-12-05 12:43:34.345751] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:34.610 [2024-12-05 12:43:34.345830] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:34.610 [2024-12-05 12:43:34.345852] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:34.610 [2024-12-05 12:43:34.348149] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:34.610 [2024-12-05 12:43:34.348965] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:34.610 [2024-12-05 12:43:34.348993] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:34.610 [2024-12-05 12:43:34.349501] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
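
The hello_bdev example traced above opens the bdev named by -b, writes a "Hello World!" buffer through an I/O channel, and reads it back before stopping the app. It can be replayed by hand with the same JSON the harness passes, or, on a machine without the QEMU NVMe devices, against a RAM-backed Malloc bdev (the Malloc config below is an illustrative assumption, not part of this run):

# Replay of the invocation above, outside run_test:
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
"$SPDK_DIR/build/examples/hello_bdev" --json "$SPDK_DIR/test/bdev/bdev.json" -b Nvme0n1

# Hardware-free variant: build a one-off Malloc bdev config and point -b at it.
cat > /tmp/hello_malloc.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
        }
      ]
    }
  ]
}
JSON
"$SPDK_DIR/build/examples/hello_bdev" --json /tmp/hello_malloc.json -b Malloc0
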
00:12:34.610 00:12:34.610 [2024-12-05 12:43:34.349531] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:34.869 00:12:34.869 real 0m0.866s 00:12:34.869 user 0m0.573s 00:12:34.869 sys 0m0.188s 00:12:34.869 12:43:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.869 ************************************ 00:12:34.869 END TEST bdev_hello_world 00:12:34.869 ************************************ 00:12:34.869 12:43:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 12:43:34 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:12:34.869 12:43:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.869 12:43:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.869 12:43:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 ************************************ 00:12:34.869 START TEST bdev_bounds 00:12:34.869 ************************************ 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:34.869 Process bdevio pid: 72085 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72085 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72085' 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72085 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72085 ']' 00:12:34.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 12:43:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:34.869 [2024-12-05 12:43:34.674038] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
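
bdevio is launched here with -w so it idles after loading the JSON config; the CUnit suites only run when tests.py perform_tests fires the trigger RPC, as the lines that follow show. A rough manual equivalent (the fixed sleep is a crude stand-in for the harness's waitforlisten):

# Sketch: run the bdevio suites by hand against the same config.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

"$SPDK_DIR/test/bdev/bdevio/bdevio" -w --json "$SPDK_DIR/test/bdev/bdev.json" &
bdevio_pid=$!
sleep 2   # assumed settling time; the harness polls the RPC socket instead

"$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"
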
00:12:34.869 [2024-12-05 12:43:34.674245] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72085 ] 00:12:35.127 [2024-12-05 12:43:34.838964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.127 [2024-12-05 12:43:34.869114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.127 [2024-12-05 12:43:34.869478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.127 [2024-12-05 12:43:34.869528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.692 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.692 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:35.692 12:43:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:35.949 I/O targets: 00:12:35.949 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:35.949 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:35.949 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:35.949 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:35.949 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:35.949 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:35.949 00:12:35.949 00:12:35.949 CUnit - A unit testing framework for C - Version 2.1-3 00:12:35.949 http://cunit.sourceforge.net/ 00:12:35.949 00:12:35.949 00:12:35.949 Suite: bdevio tests on: Nvme3n1 00:12:35.949 Test: blockdev write read block ...passed 00:12:35.949 Test: blockdev write zeroes read block ...passed 00:12:35.949 Test: blockdev write zeroes read no split ...passed 00:12:35.949 Test: blockdev write zeroes read split ...passed 00:12:35.949 Test: blockdev write zeroes read split partial ...passed 00:12:35.949 Test: blockdev reset ...[2024-12-05 12:43:35.634437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:12:35.949 [2024-12-05 12:43:35.636782] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:12:35.949 passed 00:12:35.949 Test: blockdev write read 8 blocks ...passed 00:12:35.949 Test: blockdev write read size > 128k ...passed 00:12:35.949 Test: blockdev write read invalid size ...passed 00:12:35.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:35.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:35.949 Test: blockdev write read max offset ...passed 00:12:35.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:35.949 Test: blockdev writev readv 8 blocks ...passed 00:12:35.949 Test: blockdev writev readv 30 x 1block ...passed 00:12:35.949 Test: blockdev writev readv block ...passed 00:12:35.949 Test: blockdev writev readv size > 128k ...passed 00:12:35.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:35.949 Test: blockdev comparev and writev ...passed 00:12:35.949 Test: blockdev nvme passthru rw ...[2024-12-05 12:43:35.647626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d280e000 len:0x1000 00:12:35.949 [2024-12-05 12:43:35.647701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:35.949 passed 00:12:35.949 Test: blockdev nvme passthru vendor specific ...passed 00:12:35.949 Test: blockdev nvme admin passthru ...[2024-12-05 12:43:35.648395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:35.949 [2024-12-05 12:43:35.648438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:35.949 passed 00:12:35.949 Test: blockdev copy ...passed 00:12:35.949 Suite: bdevio tests on: Nvme2n3 00:12:35.949 Test: blockdev write read block ...passed 00:12:35.949 Test: blockdev write zeroes read block ...passed 00:12:35.949 Test: blockdev write zeroes read no split ...passed 00:12:35.949 Test: blockdev write zeroes read split ...passed 00:12:35.949 Test: blockdev write zeroes read split partial ...passed 00:12:35.949 Test: blockdev reset ...[2024-12-05 12:43:35.676848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:35.949 passed 00:12:35.949 Test: blockdev write read 8 blocks ...[2024-12-05 12:43:35.682087] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:35.949 passed 00:12:35.949 Test: blockdev write read size > 128k ...passed 00:12:35.949 Test: blockdev write read invalid size ...passed 00:12:35.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:35.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:35.949 Test: blockdev write read max offset ...passed 00:12:35.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:35.949 Test: blockdev writev readv 8 blocks ...passed 00:12:35.949 Test: blockdev writev readv 30 x 1block ...passed 00:12:35.949 Test: blockdev writev readv block ...passed 00:12:35.949 Test: blockdev writev readv size > 128k ...passed 00:12:35.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:35.949 Test: blockdev comparev and writev ...[2024-12-05 12:43:35.698651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2806000 len:0x1000 00:12:35.949 [2024-12-05 12:43:35.698699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:35.949 passed 00:12:35.949 Test: blockdev nvme passthru rw ...passed 00:12:35.949 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:43:35.700663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:35.949 passed 00:12:35.949 Test: blockdev nvme admin passthru ...[2024-12-05 12:43:35.700690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:35.949 passed 00:12:35.949 Test: blockdev copy ...passed 00:12:35.949 Suite: bdevio tests on: Nvme2n2 00:12:35.949 Test: blockdev write read block ...passed 00:12:35.949 Test: blockdev write zeroes read block ...passed 00:12:35.949 Test: blockdev write zeroes read no split ...passed 00:12:35.949 Test: blockdev write zeroes read split ...passed 00:12:35.949 Test: blockdev write zeroes read split partial ...passed 00:12:35.949 Test: blockdev reset ...[2024-12-05 12:43:35.726247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:35.949 passed 00:12:35.950 Test: blockdev write read 8 blocks ...[2024-12-05 12:43:35.730387] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:35.950 passed 00:12:35.950 Test: blockdev write read size > 128k ...passed 00:12:35.950 Test: blockdev write read invalid size ...passed 00:12:35.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:35.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:35.950 Test: blockdev write read max offset ...passed 00:12:35.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:35.950 Test: blockdev writev readv 8 blocks ...passed 00:12:35.950 Test: blockdev writev readv 30 x 1block ...passed 00:12:35.950 Test: blockdev writev readv block ...passed 00:12:35.950 Test: blockdev writev readv size > 128k ...passed 00:12:35.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:35.950 Test: blockdev comparev and writev ...[2024-12-05 12:43:35.748271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2808000 len:0x1000 00:12:35.950 [2024-12-05 12:43:35.748397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:35.950 passed 00:12:35.950 Test: blockdev nvme passthru rw ...passed 00:12:35.950 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:43:35.751191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:35.950 [2024-12-05 12:43:35.751271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:35.950 passed 00:12:35.950 Test: blockdev nvme admin passthru ...passed 00:12:35.950 Test: blockdev copy ...passed 00:12:35.950 Suite: bdevio tests on: Nvme2n1 00:12:35.950 Test: blockdev write read block ...passed 00:12:35.950 Test: blockdev write zeroes read block ...passed 00:12:35.950 Test: blockdev write zeroes read no split ...passed 00:12:35.950 Test: blockdev write zeroes read split ...passed 00:12:35.950 Test: blockdev write zeroes read split partial ...passed 00:12:35.950 Test: blockdev reset ...[2024-12-05 12:43:35.779285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:12:35.950 [2024-12-05 12:43:35.781864] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:12:35.950 passed 00:12:35.950 Test: blockdev write read 8 blocks ...passed 00:12:35.950 Test: blockdev write read size > 128k ...passed 00:12:35.950 Test: blockdev write read invalid size ...passed 00:12:35.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:35.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:35.950 Test: blockdev write read max offset ...passed 00:12:35.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:35.950 Test: blockdev writev readv 8 blocks ...passed 00:12:35.950 Test: blockdev writev readv 30 x 1block ...passed 00:12:35.950 Test: blockdev writev readv block ...passed 00:12:35.950 Test: blockdev writev readv size > 128k ...passed 00:12:35.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:35.950 Test: blockdev comparev and writev ...[2024-12-05 12:43:35.797859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2404000 len:0x1000 00:12:35.950 [2024-12-05 12:43:35.797901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:36.207 passed 00:12:36.207 Test: blockdev nvme passthru rw ...passed 00:12:36.207 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:43:35.800902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:36.207 [2024-12-05 12:43:35.800933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:36.207 passed 00:12:36.207 Test: blockdev nvme admin passthru ...passed 00:12:36.207 Test: blockdev copy ...passed 00:12:36.207 Suite: bdevio tests on: Nvme1n1 00:12:36.207 Test: blockdev write read block ...passed 00:12:36.207 Test: blockdev write zeroes read block ...passed 00:12:36.207 Test: blockdev write zeroes read no split ...passed 00:12:36.207 Test: blockdev write zeroes read split ...passed 00:12:36.207 Test: blockdev write zeroes read split partial ...passed 00:12:36.207 Test: blockdev reset ...[2024-12-05 12:43:35.829826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:12:36.207 passed 00:12:36.207 Test: blockdev write read 8 blocks ...[2024-12-05 12:43:35.833839] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:12:36.207 passed 00:12:36.207 Test: blockdev write read size > 128k ...passed 00:12:36.207 Test: blockdev write read invalid size ...passed 00:12:36.207 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.207 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.207 Test: blockdev write read max offset ...passed 00:12:36.207 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.207 Test: blockdev writev readv 8 blocks ...passed 00:12:36.207 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.207 Test: blockdev writev readv block ...passed 00:12:36.207 Test: blockdev writev readv size > 128k ...passed 00:12:36.207 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.207 Test: blockdev comparev and writev ...[2024-12-05 12:43:35.850513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2eac3d000 len:0x1000 00:12:36.207 [2024-12-05 12:43:35.850570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:36.207 passed 00:12:36.207 Test: blockdev nvme passthru rw ...passed 00:12:36.207 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:43:35.852950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:36.207 [2024-12-05 12:43:35.852985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:36.207 passed 00:12:36.207 Test: blockdev nvme admin passthru ...passed 00:12:36.207 Test: blockdev copy ...passed 00:12:36.207 Suite: bdevio tests on: Nvme0n1 00:12:36.207 Test: blockdev write read block ...passed 00:12:36.207 Test: blockdev write zeroes read block ...passed 00:12:36.207 Test: blockdev write zeroes read no split ...passed 00:12:36.207 Test: blockdev write zeroes read split ...passed 00:12:36.207 Test: blockdev write zeroes read split partial ...passed 00:12:36.207 Test: blockdev reset ...[2024-12-05 12:43:35.882691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:36.207 [2024-12-05 12:43:35.886654] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:36.207 passed 00:12:36.208 Test: blockdev write read 8 blocks ...passed 00:12:36.208 Test: blockdev write read size > 128k ...passed 00:12:36.208 Test: blockdev write read invalid size ...passed 00:12:36.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:36.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:36.208 Test: blockdev write read max offset ...passed 00:12:36.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:36.208 Test: blockdev writev readv 8 blocks ...passed 00:12:36.208 Test: blockdev writev readv 30 x 1block ...passed 00:12:36.208 Test: blockdev writev readv block ...passed 00:12:36.208 Test: blockdev writev readv size > 128k ...passed 00:12:36.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:36.208 Test: blockdev comparev and writev ...passed 00:12:36.208 Test: blockdev nvme passthru rw ...[2024-12-05 12:43:35.901483] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:12:36.208 separate metadata which is not supported yet. 
00:12:36.208 passed 00:12:36.208 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:43:35.903132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:12:36.208 passed 00:12:36.208 Test: blockdev nvme admin passthru ...[2024-12-05 12:43:35.903181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:12:36.208 passed 00:12:36.208 Test: blockdev copy ...passed 00:12:36.208 00:12:36.208 Run Summary: Type Total Ran Passed Failed Inactive 00:12:36.208 suites 6 6 n/a 0 0 00:12:36.208 tests 138 138 138 0 0 00:12:36.208 asserts 893 893 893 0 n/a 00:12:36.208 00:12:36.208 Elapsed time = 0.644 seconds 00:12:36.208 0 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72085 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72085 ']' 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72085 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72085 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72085' 00:12:36.208 killing process with pid 72085 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72085 00:12:36.208 12:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72085 00:12:36.465 12:43:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:36.465 00:12:36.465 real 0m1.557s 00:12:36.465 user 0m3.806s 00:12:36.465 sys 0m0.341s 00:12:36.465 12:43:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.465 12:43:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:36.465 ************************************ 00:12:36.465 END TEST bdev_bounds 00:12:36.465 ************************************ 00:12:36.465 12:43:36 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:36.465 12:43:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:36.465 12:43:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.465 12:43:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:36.465 ************************************ 00:12:36.465 START TEST bdev_nbd 00:12:36.465 ************************************ 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:36.465 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72134 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72134 /var/tmp/spdk-nbd.sock 00:12:36.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72134 ']' 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.466 12:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:36.466 [2024-12-05 12:43:36.286093] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
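
nbd_function_test will now pair each bdev with a kernel /dev/nbdX node through the spdk-nbd.sock RPC server started above, then prove each node is readable with a one-block direct-I/O dd. The cycle it repeats per device, condensed into a sketch (commands as in the trace that follows; assumes the nbd kernel module is loaded and the device number is free):

# Export one bdev over NBD, smoke-test it, and tear it down.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
RPC=("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock)

nbd_dev=$("${RPC[@]}" nbd_start_disk Nvme0n1)   # prints the node, e.g. /dev/nbd0
grep -q -w "$(basename "$nbd_dev")" /proc/partitions && echo "$nbd_dev registered"

# Same check the harness uses: read one 4 KiB block through the kernel.
dd if="$nbd_dev" of=/dev/null bs=4096 count=1 iflag=direct

"${RPC[@]}" nbd_get_disks | jq -r '.[] | .nbd_device'   # list active exports
"${RPC[@]}" nbd_stop_disk "$nbd_dev"
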
00:12:36.466 [2024-12-05 12:43:36.286229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.722 [2024-12-05 12:43:36.452332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.722 [2024-12-05 12:43:36.480143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:37.287 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.853 1+0 records in 
00:12:37.853 1+0 records out 00:12:37.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548442 s, 7.5 MB/s 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:37.853 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:12:38.110 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.111 1+0 records in 00:12:38.111 1+0 records out 00:12:38.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00164496 s, 2.5 MB/s 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:38.111 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.370 12:43:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.370 1+0 records in 00:12:38.370 1+0 records out 00:12:38.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000982376 s, 4.2 MB/s 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:38.370 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.627 1+0 records in 00:12:38.627 1+0 records out 00:12:38.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961234 s, 4.3 MB/s 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.627 12:43:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:38.627 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.893 1+0 records in 00:12:38.893 1+0 records out 00:12:38.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810831 s, 5.1 MB/s 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:38.893 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.152 1+0 records in 00:12:39.152 1+0 records out 00:12:39.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000966923 s, 4.2 MB/s 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:39.152 12:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd0", 00:12:39.410 "bdev_name": "Nvme0n1" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd1", 00:12:39.410 "bdev_name": "Nvme1n1" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd2", 00:12:39.410 "bdev_name": "Nvme2n1" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd3", 00:12:39.410 "bdev_name": "Nvme2n2" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd4", 00:12:39.410 "bdev_name": "Nvme2n3" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd5", 00:12:39.410 "bdev_name": "Nvme3n1" 00:12:39.410 } 00:12:39.410 ]' 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd0", 00:12:39.410 "bdev_name": "Nvme0n1" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd1", 00:12:39.410 "bdev_name": "Nvme1n1" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd2", 00:12:39.410 "bdev_name": "Nvme2n1" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd3", 00:12:39.410 "bdev_name": "Nvme2n2" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd4", 00:12:39.410 "bdev_name": "Nvme2n3" 00:12:39.410 }, 00:12:39.410 { 00:12:39.410 "nbd_device": "/dev/nbd5", 00:12:39.410 "bdev_name": "Nvme3n1" 00:12:39.410 } 00:12:39.410 ]' 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.410 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.411 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:39.411 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.411 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.411 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.668 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.927 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.186 12:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.443 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.700 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:40.958 12:43:40 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:40.958 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:12:41.216 /dev/nbd0 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.216 
12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.216 1+0 records in 00:12:41.216 1+0 records out 00:12:41.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594776 s, 6.9 MB/s 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.216 12:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:12:41.474 /dev/nbd1 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.474 1+0 records in 00:12:41.474 1+0 records out 00:12:41.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154224 s, 2.7 MB/s 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.474 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:12:41.734 /dev/nbd10 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.734 1+0 records in 00:12:41.734 1+0 records out 00:12:41.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924639 s, 4.4 MB/s 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.734 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:12:41.734 /dev/nbd11 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.995 1+0 records in 00:12:41.995 1+0 records out 00:12:41.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00366538 s, 1.1 MB/s 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:12:41.995 /dev/nbd12 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:41.995 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.996 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.996 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:12:41.996 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:41.996 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.996 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.996 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.996 1+0 records in 00:12:41.996 1+0 records out 00:12:41.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882053 s, 4.6 MB/s 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:42.254 12:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:12:42.254 /dev/nbd13 00:12:42.254 12:43:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.254 1+0 records in 00:12:42.254 1+0 records out 00:12:42.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404697 s, 10.1 MB/s 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.254 12:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:42.255 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.255 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:42.255 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.255 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.255 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd0", 00:12:42.513 "bdev_name": "Nvme0n1" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd1", 00:12:42.513 "bdev_name": "Nvme1n1" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd10", 00:12:42.513 "bdev_name": "Nvme2n1" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd11", 00:12:42.513 "bdev_name": "Nvme2n2" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd12", 00:12:42.513 "bdev_name": "Nvme2n3" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd13", 00:12:42.513 "bdev_name": "Nvme3n1" 00:12:42.513 } 00:12:42.513 ]' 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd0", 00:12:42.513 "bdev_name": "Nvme0n1" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd1", 00:12:42.513 "bdev_name": "Nvme1n1" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd10", 00:12:42.513 "bdev_name": "Nvme2n1" 00:12:42.513 }, 
00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd11", 00:12:42.513 "bdev_name": "Nvme2n2" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd12", 00:12:42.513 "bdev_name": "Nvme2n3" 00:12:42.513 }, 00:12:42.513 { 00:12:42.513 "nbd_device": "/dev/nbd13", 00:12:42.513 "bdev_name": "Nvme3n1" 00:12:42.513 } 00:12:42.513 ]' 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:42.513 /dev/nbd1 00:12:42.513 /dev/nbd10 00:12:42.513 /dev/nbd11 00:12:42.513 /dev/nbd12 00:12:42.513 /dev/nbd13' 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:42.513 /dev/nbd1 00:12:42.513 /dev/nbd10 00:12:42.513 /dev/nbd11 00:12:42.513 /dev/nbd12 00:12:42.513 /dev/nbd13' 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:42.513 256+0 records in 00:12:42.513 256+0 records out 00:12:42.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00956377 s, 110 MB/s 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.513 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:42.771 256+0 records in 00:12:42.771 256+0 records out 00:12:42.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0820204 s, 12.8 MB/s 00:12:42.771 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.771 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:42.771 256+0 records in 00:12:42.771 256+0 records out 00:12:42.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644131 s, 16.3 MB/s 00:12:42.771 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.771 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:42.771 256+0 records in 00:12:42.771 256+0 records out 00:12:42.771 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.0630803 s, 16.6 MB/s 00:12:42.771 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.771 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:43.029 256+0 records in 00:12:43.029 256+0 records out 00:12:43.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0638702 s, 16.4 MB/s 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:43.029 256+0 records in 00:12:43.029 256+0 records out 00:12:43.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0623625 s, 16.8 MB/s 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:43.029 256+0 records in 00:12:43.029 256+0 records out 00:12:43.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644104 s, 16.3 MB/s 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.029 12:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.287 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.544 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.802 12:43:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.802 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.058 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.439 12:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.439 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:44.700 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:44.958 malloc_lvol_verify 00:12:44.958 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:45.235 5288c064-5b83-47ca-b708-89201053e64d 00:12:45.235 12:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:45.492 65e08ba1-9342-4ee7-8d08-2c5a8a7f65cb 00:12:45.492 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:45.492 /dev/nbd0 00:12:45.492 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:45.492 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:45.492 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:45.492 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:45.492 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:45.492 mke2fs 1.47.0 (5-Feb-2023) 00:12:45.492 Discarding device blocks: 0/4096 done 00:12:45.492 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:45.492 00:12:45.492 Allocating group tables: 0/1 done 00:12:45.749 Writing inode tables: 0/1 done 00:12:45.749 Creating journal (1024 blocks): done 00:12:45.749 Writing superblocks and filesystem accounting information: 0/1 done 00:12:45.749 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:45.749 12:43:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.749 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72134 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72134 ']' 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72134 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.750 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72134 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.006 killing process with pid 72134 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72134' 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72134 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72134 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:46.006 00:12:46.006 real 0m9.577s 00:12:46.006 user 0m14.134s 00:12:46.006 sys 0m3.229s 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.006 12:43:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:46.006 ************************************ 00:12:46.006 END TEST bdev_nbd 00:12:46.006 ************************************ 00:12:46.006 12:43:45 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:12:46.006 12:43:45 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:12:46.006 skipping fio tests on NVMe due to multi-ns failures. 00:12:46.006 12:43:45 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
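The waitfornbd / waitfornbd_exit helpers traced throughout the bdev_nbd stage above follow one pattern: poll /proc/partitions until the kernel registers (or releases) the nbd device, then prove the node is usable with a single direct-I/O read. A minimal Bash sketch of that pattern, reconstructed from the xtrace output rather than copied from autotest_common.sh (the retry interval and the scratch-file path are assumptions):

# Poll until the kernel lists the device, then verify it serves data.
# Reconstructed from the trace above; the real helper in
# autotest_common.sh may differ in details such as the sleep interval
# and its handling of the exhausted-retries case.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                      # retry interval is an assumption
    done
    # One 4 KiB direct-I/O read, exactly as in the dd lines above.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                   # mirrors the '[' 4096 '!=' 0 ']' check
}

waitfornbd_exit evidently inverts the first half of this: after nbd_stop_disk it loops with the same bound of 20 tries until the name disappears from /proc/partitions, which is where the break lines in the stop traces above come from.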
00:12:46.006 12:43:45 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:46.006 12:43:45 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:46.006 12:43:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:46.006 12:43:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.006 12:43:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.006 ************************************ 00:12:46.006 START TEST bdev_verify 00:12:46.006 ************************************ 00:12:46.006 12:43:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:46.262 [2024-12-05 12:43:45.896996] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:12:46.262 [2024-12-05 12:43:45.897148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72508 ] 00:12:46.262 [2024-12-05 12:43:46.055447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:46.262 [2024-12-05 12:43:46.084371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.262 [2024-12-05 12:43:46.084424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.827 Running I/O for 5 seconds... 00:12:48.865 21504.00 IOPS, 84.00 MiB/s [2024-12-05T12:43:50.098Z] 21056.00 IOPS, 82.25 MiB/s [2024-12-05T12:43:51.040Z] 20778.67 IOPS, 81.17 MiB/s [2024-12-05T12:43:51.611Z] 20160.00 IOPS, 78.75 MiB/s [2024-12-05T12:43:51.611Z] 20070.40 IOPS, 78.40 MiB/s 00:12:51.759 Latency(us) 00:12:51.759 [2024-12-05T12:43:51.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.759 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x0 length 0xbd0bd 00:12:51.759 Nvme0n1 : 5.04 1675.57 6.55 0.00 0.00 76111.39 11241.94 72190.42 00:12:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:51.759 Nvme0n1 : 5.05 1623.45 6.34 0.00 0.00 78565.92 15022.87 75013.51 00:12:51.759 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x0 length 0xa0000 00:12:51.759 Nvme1n1 : 5.04 1675.09 6.54 0.00 0.00 76032.97 13107.20 66140.95 00:12:51.759 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0xa0000 length 0xa0000 00:12:51.759 Nvme1n1 : 5.05 1622.97 6.34 0.00 0.00 78420.87 16232.76 66140.95 00:12:51.759 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x0 length 0x80000 00:12:51.759 Nvme2n1 : 5.06 1681.40 6.57 0.00 0.00 75600.25 8166.79 64124.46 00:12:51.759 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x80000 length 0x80000 00:12:51.759 Nvme2n1 : 5.05 1622.50 6.34 0.00 0.00 78285.59 17644.31 62511.26 00:12:51.759 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x0 length 0x80000 00:12:51.759 Nvme2n2 : 5.06 1680.92 6.57 0.00 0.00 75461.80 8015.56 64527.75 00:12:51.759 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x80000 length 0x80000 00:12:51.759 Nvme2n2 : 5.07 1629.66 6.37 0.00 0.00 77781.55 5898.24 65334.35 00:12:51.759 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x0 length 0x80000 00:12:51.759 Nvme2n3 : 5.08 1689.57 6.60 0.00 0.00 75024.17 9427.10 68560.74 00:12:51.759 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x80000 length 0x80000 00:12:51.759 Nvme2n3 : 5.08 1638.94 6.40 0.00 0.00 77260.60 7813.91 68560.74 00:12:51.759 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x0 length 0x20000 00:12:51.759 Nvme3n1 : 5.08 1688.85 6.60 0.00 0.00 74880.37 8418.86 70173.93 00:12:51.759 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:51.759 Verification LBA range: start 0x20000 length 0x20000 00:12:51.759 Nvme3n1 : 5.08 1638.11 6.40 0.00 0.00 77129.51 8771.74 70173.93 00:12:51.759 [2024-12-05T12:43:51.611Z] =================================================================================================================== 00:12:51.759 [2024-12-05T12:43:51.611Z] Total : 19867.02 77.61 0.00 0.00 76690.59 5898.24 75013.51 00:12:53.141 00:12:53.141 real 0m6.846s 00:12:53.141 user 0m12.910s 00:12:53.141 sys 0m0.249s 00:12:53.141 ************************************ 00:12:53.141 12:43:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.141 12:43:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:53.141 END TEST bdev_verify 00:12:53.141 ************************************ 00:12:53.141 12:43:52 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.141 12:43:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:53.141 12:43:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.141 12:43:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.141 ************************************ 00:12:53.141 START TEST bdev_verify_big_io 00:12:53.141 ************************************ 00:12:53.141 12:43:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:53.141 [2024-12-05 12:43:52.819658] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
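Each verification stage here runs the same bdevperf binary against test/bdev/bdev.json, varying only the workload flags: -w verify, queue depth -q 128, I/O size -o 65536 for this big-I/O pass, -t 5 seconds, and core mask -m 0x3 for the two reactors. A hypothetical, self-contained reproduction is sketched below; the PCIe address and the temp path are placeholders, not values from this log, and the real bdev.json attaches all of the controllers that appear in the result tables (Nvme0 through Nvme3):

# Hypothetical stand-alone reproduction of the big-I/O verify stage.
# bdev_nvme_attach_controller is a standard SPDK config method; the
# traddr below is a placeholder and must be replaced with a real
# controller address from 'lspci' on the target machine.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The -C flag is carried over verbatim from the run_test invocation in the log above.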
00:12:53.142 [2024-12-05 12:43:52.819826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72601 ] 00:12:53.142 [2024-12-05 12:43:52.979604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:53.402 [2024-12-05 12:43:53.011569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.402 [2024-12-05 12:43:53.011608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.663 Running I/O for 5 seconds... 00:12:58.810 2007.00 IOPS, 125.44 MiB/s [2024-12-05T12:43:59.605Z] 2566.00 IOPS, 160.38 MiB/s [2024-12-05T12:43:59.605Z] 3120.33 IOPS, 195.02 MiB/s 00:12:59.753 Latency(us) 00:12:59.753 [2024-12-05T12:43:59.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.753 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x0 length 0xbd0b 00:12:59.753 Nvme0n1 : 5.72 129.65 8.10 0.00 0.00 951617.89 22685.54 967916.31 00:12:59.753 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:59.753 Nvme0n1 : 5.76 120.83 7.55 0.00 0.00 1017362.81 16736.89 1568024.42 00:12:59.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x0 length 0xa000 00:12:59.753 Nvme1n1 : 5.72 129.95 8.12 0.00 0.00 925145.50 53638.70 871124.68 00:12:59.753 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0xa000 length 0xa000 00:12:59.753 Nvme1n1 : 5.69 126.00 7.88 0.00 0.00 949354.59 30650.68 1045349.61 00:12:59.753 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x0 length 0x8000 00:12:59.753 Nvme2n1 : 5.72 134.26 8.39 0.00 0.00 880721.92 75820.11 903388.55 00:12:59.753 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x8000 length 0x8000 00:12:59.753 Nvme2n1 : 5.84 129.18 8.07 0.00 0.00 889660.36 68157.44 1084066.26 00:12:59.753 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x0 length 0x8000 00:12:59.753 Nvme2n2 : 5.72 134.19 8.39 0.00 0.00 853204.15 76626.71 929199.66 00:12:59.753 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x8000 length 0x8000 00:12:59.753 Nvme2n2 : 5.84 127.41 7.96 0.00 0.00 882790.79 72997.02 1690627.15 00:12:59.753 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x0 length 0x8000 00:12:59.753 Nvme2n3 : 5.81 143.27 8.95 0.00 0.00 777487.81 37506.76 961463.53 00:12:59.753 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x8000 length 0x8000 00:12:59.753 Nvme2n3 : 5.89 139.13 8.70 0.00 0.00 789730.38 16333.59 1716438.25 00:12:59.753 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:59.753 Verification LBA range: start 0x0 length 0x2000 00:12:59.753 Nvme3n1 : 5.86 158.46 9.90 0.00 0.00 685680.11 4612.73 987274.63 00:12:59.753 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:12:59.753 Verification LBA range: start 0x2000 length 0x2000 00:12:59.753 Nvme3n1 : 5.92 159.81 9.99 0.00 0.00 670959.57 1329.62 1742249.35 00:12:59.753 [2024-12-05T12:43:59.605Z] =================================================================================================================== 00:12:59.753 [2024-12-05T12:43:59.605Z] Total : 1632.16 102.01 0.00 0.00 846573.64 1329.62 1742249.35 00:13:00.696 00:13:00.696 real 0m7.711s 00:13:00.696 user 0m14.504s 00:13:00.696 sys 0m0.313s 00:13:00.696 ************************************ 00:13:00.696 END TEST bdev_verify_big_io 00:13:00.696 ************************************ 00:13:00.696 12:44:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.696 12:44:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.696 12:44:00 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.696 12:44:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:00.696 12:44:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.696 12:44:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.696 ************************************ 00:13:00.696 START TEST bdev_write_zeroes 00:13:00.696 ************************************ 00:13:00.696 12:44:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.957 [2024-12-05 12:44:00.625315] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:00.957 [2024-12-05 12:44:00.625518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72701 ] 00:13:00.957 [2024-12-05 12:44:00.790797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.218 [2024-12-05 12:44:00.829554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.476 Running I/O for 1 seconds... 
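Every bdevperf run in this section is driven the same way; as a minimal sketch (paths and flags copied from the traces above, shell variables introduced here only for readability):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # the verify pass above: queue depth 128, 64 KiB I/Os, 5 second run
  "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5
  # the write_zeroes pass whose results follow: depth 128, 4 KiB I/Os, 1 second run
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1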
00:13:02.851 26224.00 IOPS, 102.44 MiB/s 00:13:02.851 Latency(us) 00:13:02.851 [2024-12-05T12:44:02.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.851 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.851 Nvme0n1 : 1.12 3777.90 14.76 0.00 0.00 33498.46 5873.03 253271.43 00:13:02.851 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.851 Nvme1n1 : 1.10 4254.11 16.62 0.00 0.00 29416.09 12250.19 200842.63 00:13:02.851 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.851 Nvme2n1 : 1.11 4220.57 16.49 0.00 0.00 29541.24 10939.47 200842.63 00:13:02.851 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.851 Nvme2n2 : 1.11 4154.75 16.23 0.00 0.00 30302.86 7612.26 200842.63 00:13:02.851 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.851 Nvme2n3 : 1.11 4147.60 16.20 0.00 0.00 30321.58 7561.85 200842.63 00:13:02.851 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:02.851 Nvme3n1 : 1.11 4084.65 15.96 0.00 0.00 30787.04 10586.58 200842.63 00:13:02.851 [2024-12-05T12:44:02.703Z] =================================================================================================================== 00:13:02.851 [2024-12-05T12:44:02.703Z] Total : 24639.58 96.25 0.00 0.00 30598.92 5873.03 253271.43 00:13:03.482 00:13:03.482 real 0m2.473s 00:13:03.482 user 0m2.034s 00:13:03.482 sys 0m0.319s 00:13:03.482 12:44:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.482 ************************************ 00:13:03.482 END TEST bdev_write_zeroes 00:13:03.482 ************************************ 00:13:03.482 12:44:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:03.482 12:44:03 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.482 12:44:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:03.482 12:44:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.482 12:44:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.482 ************************************ 00:13:03.482 START TEST bdev_json_nonenclosed 00:13:03.482 ************************************ 00:13:03.482 12:44:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.482 [2024-12-05 12:44:03.160186] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:13:03.482 [2024-12-05 12:44:03.160424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72743 ] 00:13:03.482 [2024-12-05 12:44:03.329247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.742 [2024-12-05 12:44:03.373761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.742 [2024-12-05 12:44:03.373935] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:03.742 [2024-12-05 12:44:03.373961] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:03.742 [2024-12-05 12:44:03.373979] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:03.742 00:13:03.742 real 0m0.412s 00:13:03.742 user 0m0.170s 00:13:03.742 sys 0m0.136s 00:13:03.742 12:44:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.742 12:44:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:03.742 ************************************ 00:13:03.742 END TEST bdev_json_nonenclosed 00:13:03.742 ************************************ 00:13:03.742 12:44:03 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:03.742 12:44:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:03.742 12:44:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.742 12:44:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.742 ************************************ 00:13:03.742 START TEST bdev_json_nonarray 00:13:03.742 ************************************ 00:13:03.742 12:44:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.000 [2024-12-05 12:44:03.631939] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:04.000 [2024-12-05 12:44:03.632117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72775 ] 00:13:04.000 [2024-12-05 12:44:03.800747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.261 [2024-12-05 12:44:03.851349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.261 [2024-12-05 12:44:03.851567] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
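Both JSON negative tests here exercise the same validation in json_config.c: the configuration must be a top-level object enclosed in {} whose "subsystems" key is an array. A sketch of the three shapes (the inline JSON is illustrative, not the actual fixture contents):

  # accepted shape
  echo '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > ok.json
  # rejected by bdev_json_nonenclosed: not enclosed in {}
  echo '"subsystems": []' > nonenclosed.json
  # rejected by bdev_json_nonarray: "subsystems" is an object, not an array
  echo '{ "subsystems": {} }' > nonarray.json

In both failing runs initialization aborts, so spdk_app_stop exits non-zero; the rpc.c *ERROR* and spdk_app_stop *WARNING* notices are therefore the expected output, and each test passes when the run fails this way.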
00:13:04.261 [2024-12-05 12:44:03.851594] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:04.261 [2024-12-05 12:44:03.851616] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:04.261 00:13:04.261 real 0m0.427s 00:13:04.261 user 0m0.180s 00:13:04.261 sys 0m0.141s 00:13:04.261 12:44:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.261 12:44:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:04.261 ************************************ 00:13:04.261 END TEST bdev_json_nonarray 00:13:04.261 ************************************ 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:13:04.261 12:44:04 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:13:04.261 00:13:04.261 real 0m32.288s 00:13:04.261 user 0m50.351s 00:13:04.261 sys 0m5.689s 00:13:04.261 12:44:04 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.261 ************************************ 00:13:04.261 END TEST blockdev_nvme 00:13:04.261 ************************************ 00:13:04.261 12:44:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.261 12:44:04 -- spdk/autotest.sh@209 -- # uname -s 00:13:04.261 12:44:04 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:13:04.261 12:44:04 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:04.261 12:44:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.261 12:44:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.261 12:44:04 -- common/autotest_common.sh@10 -- # set +x 00:13:04.261 ************************************ 00:13:04.261 START TEST blockdev_nvme_gpt 00:13:04.261 ************************************ 00:13:04.261 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:13:04.521 * Looking for test storage... 
00:13:04.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.521 12:44:04 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:04.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.521 --rc genhtml_branch_coverage=1 00:13:04.521 --rc genhtml_function_coverage=1 00:13:04.521 --rc genhtml_legend=1 00:13:04.521 --rc geninfo_all_blocks=1 00:13:04.521 --rc geninfo_unexecuted_blocks=1 00:13:04.521 00:13:04.521 ' 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:04.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.521 --rc 
genhtml_branch_coverage=1 00:13:04.521 --rc genhtml_function_coverage=1 00:13:04.521 --rc genhtml_legend=1 00:13:04.521 --rc geninfo_all_blocks=1 00:13:04.521 --rc geninfo_unexecuted_blocks=1 00:13:04.521 00:13:04.521 ' 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:04.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.521 --rc genhtml_branch_coverage=1 00:13:04.521 --rc genhtml_function_coverage=1 00:13:04.521 --rc genhtml_legend=1 00:13:04.521 --rc geninfo_all_blocks=1 00:13:04.521 --rc geninfo_unexecuted_blocks=1 00:13:04.521 00:13:04.521 ' 00:13:04.521 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:04.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.521 --rc genhtml_branch_coverage=1 00:13:04.521 --rc genhtml_function_coverage=1 00:13:04.521 --rc genhtml_legend=1 00:13:04.521 --rc geninfo_all_blocks=1 00:13:04.521 --rc geninfo_unexecuted_blocks=1 00:13:04.521 00:13:04.521 ' 00:13:04.521 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:04.521 12:44:04 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:13:04.521 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:04.521 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72849 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:04.522 12:44:04 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72849 00:13:04.522 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72849 ']' 00:13:04.522 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.522 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.522 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.522 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.522 12:44:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:04.522 [2024-12-05 12:44:04.370538] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:04.781 [2024-12-05 12:44:04.370753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72849 ] 00:13:04.781 [2024-12-05 12:44:04.534916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.781 [2024-12-05 12:44:04.583563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.721 12:44:05 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.721 12:44:05 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:13:05.721 12:44:05 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:13:05.721 12:44:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:13:05.721 12:44:05 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:05.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:05.980 Waiting for block devices as requested 00:13:05.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:06.241 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:06.241 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:06.241 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.565 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:11.565 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:13:11.565 12:44:11 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:11.565 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
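The get_zoned_devs loop traced above visits every NVMe namespace and consults the block layer's sysfs attribute; a namespace counts as zoned only when queue/zoned exists and reports something other than none. A standalone sketch of the same check:

  for ns in /sys/block/nvme*n*; do
    dev=${ns##*/}
    # mirrors is_block_zoned: existence test plus the none comparison seen above
    if [[ -e $ns/queue/zoned && $(<"$ns/queue/zoned") != none ]]; then
      echo "excluding zoned namespace $dev"
    fi
  done

In this run every namespace reports none, so each [[ none != none ]] falls through and no device is excluded from GPT setup.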
00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:13:11.566 BYT; 00:13:11.566 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:13:11.566 BYT; 00:13:11.566 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:11.566 12:44:11 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:13:11.566 12:44:11 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:13:12.512 The operation has completed successfully. 00:13:12.512 12:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:13:13.898 The operation has completed successfully. 00:13:13.898 12:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:14.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.729 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.729 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.729 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.729 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:14.729 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:13:14.729 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.729 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:14.729 [] 00:13:14.729 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.729 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:13:14.729 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:13:14.729 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:13:14.729 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:14.987 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:13:14.987 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.987 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:13:15.251 12:44:14 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:13:15.251 12:44:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.251 12:44:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:15.251 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.251 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:13:15.252 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0485f5b2-8ca0-40c9-a0f2-dc315916a066"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0485f5b2-8ca0-40c9-a0f2-dc315916a066",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' 
' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4cea906c-a66d-4f32-a2e9-4399754fd2f4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4cea906c-a66d-4f32-a2e9-4399754fd2f4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' 
"ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6aa047c2-e283-40d8-8114-bf85187e2816"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6aa047c2-e283-40d8-8114-bf85187e2816",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e2208eec-967f-4908-8efe-c049cf8f7020"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e2208eec-967f-4908-8efe-c049cf8f7020",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7645c0c4-f545-435a-9d34-972dc3c81a4e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7645c0c4-f545-435a-9d34-972dc3c81a4e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:13:15.252 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:13:15.252 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:13:15.252 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:13:15.252 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:13:15.252 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72849 00:13:15.252 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72849 ']' 00:13:15.252 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72849 00:13:15.252 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:13:15.252 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.252 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72849 00:13:15.512 killing process with pid 72849 00:13:15.512 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.512 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.512 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72849' 00:13:15.512 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72849 00:13:15.512 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72849 00:13:15.773 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:15.773 12:44:15 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:15.773 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:15.773 12:44:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.773 12:44:15 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:16.033 ************************************ 00:13:16.033 START TEST bdev_hello_world 00:13:16.033 ************************************ 00:13:16.033 12:44:15 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:13:16.033 [2024-12-05 12:44:15.703417] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:16.033 [2024-12-05 12:44:15.704509] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73469 ] 00:13:16.033 [2024-12-05 12:44:15.871112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.294 [2024-12-05 12:44:15.914337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.555 [2024-12-05 12:44:16.362150] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:16.555 [2024-12-05 12:44:16.362501] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:13:16.555 [2024-12-05 12:44:16.362561] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:16.555 [2024-12-05 12:44:16.365391] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:16.555 [2024-12-05 12:44:16.366640] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:16.555 [2024-12-05 12:44:16.366826] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:16.555 [2024-12-05 12:44:16.367497] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
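The NOTICE lines above are hello_bdev's full round trip: open the bdev, open an I/O channel, write a buffer, read it back, and compare. The invocation used by the test (taken from the trace; -b selects which bdev from the JSON config to open) is:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1

Reading back the literal string "Hello World!" is the pass condition; the app then stops itself, which produces the Stopping app notice that follows.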
00:13:16.555 00:13:16.555 [2024-12-05 12:44:16.367633] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:16.819 00:13:16.819 real 0m1.035s 00:13:16.819 user 0m0.652s 00:13:16.819 sys 0m0.270s 00:13:16.820 12:44:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.820 ************************************ 00:13:16.820 END TEST bdev_hello_world 00:13:16.820 ************************************ 00:13:16.820 12:44:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:17.080 12:44:16 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:13:17.080 12:44:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:17.080 12:44:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.080 12:44:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:17.080 ************************************ 00:13:17.081 START TEST bdev_bounds 00:13:17.081 ************************************ 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73500 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73500' 00:13:17.081 Process bdevio pid: 73500 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73500 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73500 ']' 00:13:17.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:17.081 12:44:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:17.081 [2024-12-05 12:44:16.825065] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:13:17.081 [2024-12-05 12:44:16.825262] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73500 ] 00:13:17.344 [2024-12-05 12:44:16.994835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.344 [2024-12-05 12:44:17.041131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.344 [2024-12-05 12:44:17.041543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.344 [2024-12-05 12:44:17.041596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.972 12:44:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.972 12:44:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:17.972 12:44:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:18.255 I/O targets: 00:13:18.255 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:18.255 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:13:18.255 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:13:18.255 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:18.255 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:18.255 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:18.255 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:18.255 00:13:18.255 00:13:18.255 CUnit - A unit testing framework for C - Version 2.1-3 00:13:18.255 http://cunit.sourceforge.net/ 00:13:18.255 00:13:18.255 00:13:18.255 Suite: bdevio tests on: Nvme3n1 00:13:18.255 Test: blockdev write read block ...passed 00:13:18.255 Test: blockdev write zeroes read block ...passed 00:13:18.255 Test: blockdev write zeroes read no split ...passed 00:13:18.255 Test: blockdev write zeroes read split ...passed 00:13:18.255 Test: blockdev write zeroes read split partial ...passed 00:13:18.255 Test: blockdev reset ...[2024-12-05 12:44:17.832669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:13:18.255 [2024-12-05 12:44:17.837514] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:13:18.255 Test: blockdev write read 8 blocks ...uccessful. 
00:13:18.255 passed 00:13:18.256 Test: blockdev write read size > 128k ...passed 00:13:18.256 Test: blockdev write read invalid size ...passed 00:13:18.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.256 Test: blockdev write read max offset ...passed 00:13:18.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.256 Test: blockdev writev readv 8 blocks ...passed 00:13:18.256 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.256 Test: blockdev writev readv block ...passed 00:13:18.256 Test: blockdev writev readv size > 128k ...passed 00:13:18.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.256 Test: blockdev comparev and writev ...[2024-12-05 12:44:17.858173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc0e000 len:0x1000 00:13:18.256 [2024-12-05 12:44:17.858264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev nvme passthru rw ...passed 00:13:18.256 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:44:17.860905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:13:18.256 Test: blockdev nvme admin passthru ...RP2 0x0 00:13:18.256 [2024-12-05 12:44:17.861074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev copy ...passed 00:13:18.256 Suite: bdevio tests on: Nvme2n3 00:13:18.256 Test: blockdev write read block ...passed 00:13:18.256 Test: blockdev write zeroes read block ...passed 00:13:18.256 Test: blockdev write zeroes read no split ...passed 00:13:18.256 Test: blockdev write zeroes read split ...passed 00:13:18.256 Test: blockdev write zeroes read split partial ...passed 00:13:18.256 Test: blockdev reset ...[2024-12-05 12:44:17.891988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:18.256 [2024-12-05 12:44:17.895954] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:13:18.256 Test: blockdev write read 8 blocks ...uccessful. 
00:13:18.256 passed 00:13:18.256 Test: blockdev write read size > 128k ...passed 00:13:18.256 Test: blockdev write read invalid size ...passed 00:13:18.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.256 Test: blockdev write read max offset ...passed 00:13:18.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.256 Test: blockdev writev readv 8 blocks ...passed 00:13:18.256 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.256 Test: blockdev writev readv block ...passed 00:13:18.256 Test: blockdev writev readv size > 128k ...passed 00:13:18.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.256 Test: blockdev comparev and writev ...[2024-12-05 12:44:17.916239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc06000 len:0x1000 00:13:18.256 [2024-12-05 12:44:17.916441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev nvme passthru rw ...passed 00:13:18.256 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.256 Test: blockdev nvme admin passthru ...[2024-12-05 12:44:17.919112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:18.256 [2024-12-05 12:44:17.919168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev copy ...passed 00:13:18.256 Suite: bdevio tests on: Nvme2n2 00:13:18.256 Test: blockdev write read block ...passed 00:13:18.256 Test: blockdev write zeroes read block ...passed 00:13:18.256 Test: blockdev write zeroes read no split ...passed 00:13:18.256 Test: blockdev write zeroes read split ...passed 00:13:18.256 Test: blockdev write zeroes read split partial ...passed 00:13:18.256 Test: blockdev reset ...[2024-12-05 12:44:17.949511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:18.256 [2024-12-05 12:44:17.954268] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:13:18.256 passed 00:13:18.256 Test: blockdev write read 8 blocks ...passed 00:13:18.256 Test: blockdev write read size > 128k ...passed 00:13:18.256 Test: blockdev write read invalid size ...passed 00:13:18.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.256 Test: blockdev write read max offset ...passed 00:13:18.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.256 Test: blockdev writev readv 8 blocks ...passed 00:13:18.256 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.256 Test: blockdev writev readv block ...passed 00:13:18.256 Test: blockdev writev readv size > 128k ...passed 00:13:18.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.256 Test: blockdev comparev and writev ...[2024-12-05 12:44:17.974135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc08000 len:0x1000 00:13:18.256 [2024-12-05 12:44:17.974331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev nvme passthru rw ...passed 00:13:18.256 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:44:17.976480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:18.256 [2024-12-05 12:44:17.976526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev nvme admin passthru ...passed 00:13:18.256 Test: blockdev copy ...passed 00:13:18.256 Suite: bdevio tests on: Nvme2n1 00:13:18.256 Test: blockdev write read block ...passed 00:13:18.256 Test: blockdev write zeroes read block ...passed 00:13:18.256 Test: blockdev write zeroes read no split ...passed 00:13:18.256 Test: blockdev write zeroes read split ...passed 00:13:18.256 Test: blockdev write zeroes read split partial ...passed 00:13:18.256 Test: blockdev reset ...[2024-12-05 12:44:18.006242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:13:18.256 [2024-12-05 12:44:18.008918] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:13:18.256 Test: blockdev write read 8 blocks ... 
00:13:18.256 passed 00:13:18.256 Test: blockdev write read size > 128k ...passed 00:13:18.256 Test: blockdev write read invalid size ...passed 00:13:18.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.256 Test: blockdev write read max offset ...passed 00:13:18.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.256 Test: blockdev writev readv 8 blocks ...passed 00:13:18.256 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.256 Test: blockdev writev readv block ...passed 00:13:18.256 Test: blockdev writev readv size > 128k ...passed 00:13:18.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.256 Test: blockdev comparev and writev ...[2024-12-05 12:44:18.027974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ec43d000 len:0x1000 00:13:18.256 [2024-12-05 12:44:18.028039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev nvme passthru rw ...passed 00:13:18.256 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:44:18.030714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:13:18.256 [2024-12-05 12:44:18.030756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:13:18.256 passed 00:13:18.256 Test: blockdev nvme admin passthru ...passed 00:13:18.256 Test: blockdev copy ...passed 00:13:18.256 Suite: bdevio tests on: Nvme1n1p2 00:13:18.256 Test: blockdev write read block ...passed 00:13:18.256 Test: blockdev write zeroes read block ...passed 00:13:18.256 Test: blockdev write zeroes read no split ...passed 00:13:18.256 Test: blockdev write zeroes read split ...passed 00:13:18.256 Test: blockdev write zeroes read split partial ...passed 00:13:18.256 Test: blockdev reset ...[2024-12-05 12:44:18.064791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:18.256 [2024-12-05 12:44:18.068445] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:13:18.257 Test: blockdev write read 8 blocks ... 
00:13:18.257 passed 00:13:18.257 Test: blockdev write read size > 128k ...passed 00:13:18.257 Test: blockdev write read invalid size ...passed 00:13:18.257 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.257 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.257 Test: blockdev write read max offset ...passed 00:13:18.257 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.257 Test: blockdev writev readv 8 blocks ...passed 00:13:18.257 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.257 Test: blockdev writev readv block ...passed 00:13:18.257 Test: blockdev writev readv size > 128k ...passed 00:13:18.257 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.257 Test: blockdev comparev and writev ...[2024-12-05 12:44:18.088902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ec439000 len:0x1000 00:13:18.257 [2024-12-05 12:44:18.089100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:18.257 passed 00:13:18.257 Test: blockdev nvme passthru rw ...passed 00:13:18.257 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.257 Test: blockdev nvme admin passthru ...passed 00:13:18.257 Test: blockdev copy ...passed 00:13:18.257 Suite: bdevio tests on: Nvme1n1p1 00:13:18.257 Test: blockdev write read block ...passed 00:13:18.257 Test: blockdev write zeroes read block ...passed 00:13:18.257 Test: blockdev write zeroes read no split ...passed 00:13:18.519 Test: blockdev write zeroes read split ...passed 00:13:18.519 Test: blockdev write zeroes read split partial ...passed 00:13:18.519 Test: blockdev reset ...[2024-12-05 12:44:18.115561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:13:18.519 [2024-12-05 12:44:18.120230] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:13:18.519 Test: blockdev write read 8 blocks ... 
00:13:18.519 passed 00:13:18.519 Test: blockdev write read size > 128k ...passed 00:13:18.519 Test: blockdev write read invalid size ...passed 00:13:18.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.519 Test: blockdev write read max offset ...passed 00:13:18.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.519 Test: blockdev writev readv 8 blocks ...passed 00:13:18.519 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.519 Test: blockdev writev readv block ...passed 00:13:18.519 Test: blockdev writev readv size > 128k ...passed 00:13:18.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.519 Test: blockdev comparev and writev ...[2024-12-05 12:44:18.140346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ec435000 len:0x1000 00:13:18.519 [2024-12-05 12:44:18.140415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:13:18.519 passed 00:13:18.519 Test: blockdev nvme passthru rw ...passed 00:13:18.519 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.519 Test: blockdev nvme admin passthru ...passed 00:13:18.519 Test: blockdev copy ...passed 00:13:18.519 Suite: bdevio tests on: Nvme0n1 00:13:18.519 Test: blockdev write read block ...passed 00:13:18.519 Test: blockdev write zeroes read block ...passed 00:13:18.519 Test: blockdev write zeroes read no split ...passed 00:13:18.519 Test: blockdev write zeroes read split ...passed 00:13:18.519 Test: blockdev write zeroes read split partial ...passed 00:13:18.519 Test: blockdev reset ...[2024-12-05 12:44:18.169934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:18.519 [2024-12-05 12:44:18.173661] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:13:18.519 Test: blockdev write read 8 blocks ... 00:13:18.519 passed 00:13:18.519 Test: blockdev write read size > 128k ...passed 00:13:18.519 Test: blockdev write read invalid size ...passed 00:13:18.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.519 Test: blockdev write read max offset ...passed 00:13:18.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.519 Test: blockdev writev readv 8 blocks ...passed 00:13:18.519 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.519 Test: blockdev writev readv block ...passed 00:13:18.519 Test: blockdev writev readv size > 128k ...passed 00:13:18.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.519 Test: blockdev comparev and writev ...[2024-12-05 12:44:18.191705] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet. 00:13:18.519 passed 00:13:18.519 Test: blockdev nvme passthru rw ... 
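A note on reading the bdevio output above: the COMPARE FAILURE (02/85) completions are the expected outcome, not faults. Each comparev case overwrites an LBA and then issues an NVMe Compare against the stale payload, so a miscompare status (status code type 2h, status code 85h, Compare Failure) is exactly what lets the suite print "passed"; likewise the INVALID OPCODE (00/01) completions confirm that bogus admin opcodes are rejected. The miscompare path can be reproduced from a shell with nvme-cli; a minimal sketch, assuming a disposable namespace at the hypothetical /dev/nvme0n1 with 4096-byte logical blocks (flag spellings per common nvme-cli builds):

    # Write one block of zeroes, then compare it against random data:
    # the controller should complete the Compare with Compare Failure (02/85).
    dd if=/dev/zero of=old.bin bs=4096 count=1
    dd if=/dev/urandom of=new.bin bs=4096 count=1
    nvme write /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=old.bin
    nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=new.bin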
00:13:18.519 passed 00:13:18.519 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:44:18.193645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:13:18.519 [2024-12-05 12:44:18.193707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:13:18.519 passed 00:13:18.519 Test: blockdev nvme admin passthru ...passed 00:13:18.519 Test: blockdev copy ...passed 00:13:18.519 00:13:18.519 Run Summary: Type Total Ran Passed Failed Inactive 00:13:18.519 suites 7 7 n/a 0 0 00:13:18.519 tests 161 161 161 0 0 00:13:18.519 asserts 1025 1025 1025 0 n/a 00:13:18.519 00:13:18.519 Elapsed time = 0.886 seconds 00:13:18.519 0 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73500 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73500 ']' 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73500 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73500 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.519 killing process with pid 73500 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73500' 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73500 00:13:18.519 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73500 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:18.777 00:13:18.777 real 0m1.749s 00:13:18.777 user 0m4.216s 00:13:18.777 sys 0m0.420s 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:18.777 ************************************ 00:13:18.777 END TEST bdev_bounds 00:13:18.777 ************************************ 00:13:18.777 12:44:18 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:18.777 12:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:18.777 12:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.777 12:44:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:18.777 ************************************ 00:13:18.777 START TEST bdev_nbd 00:13:18.777 ************************************ 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:13:18.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73548 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73548 /var/tmp/spdk-nbd.sock 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73548 ']' 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:18.777 12:44:18 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:19.034 [2024-12-05 12:44:18.630587] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
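The nbd xtrace that follows is dense, so for orientation: nbd_function_test exports each bdev over /var/tmp/spdk-nbd.sock as a /dev/nbdX device, and waitfornbd then sanity-checks every device by reading a single 4 KiB block with O_DIRECT and requiring a non-empty copy. Reconstructed from the trace below as a readability sketch (the nbdtest path and the 20-iteration bound are taken verbatim from the trace; the sleep back-off and the failure return are assumptions, since the trace only shows the successful first pass):

    waitfornbd() {
        local nbd_name=$1
        local i size
        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the trace
        done
        # Read one 4 KiB block with O_DIRECT and require a non-empty result.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
                bs=4096 count=1 iflag=direct
            size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
            rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1   # assumed back-off
        done
        return 1        # assumed failure path
    }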
00:13:19.034 [2024-12-05 12:44:18.630896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.034 [2024-12-05 12:44:18.786221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.034 [2024-12-05 12:44:18.811463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.965 1+0 records in 00:13:19.965 1+0 records out 00:13:19.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326249 s, 12.6 MB/s 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:19.965 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:13:20.223 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:20.223 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:20.223 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:20.223 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:20.223 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.224 1+0 records in 00:13:20.224 1+0 records out 00:13:20.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696054 s, 5.9 MB/s 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:20.224 12:44:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.480 1+0 records in 00:13:20.480 1+0 records out 00:13:20.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543434 s, 7.5 MB/s 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.480 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:20.481 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.481 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.481 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:20.481 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:20.481 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:20.481 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.736 1+0 records in 00:13:20.736 1+0 records out 00:13:20.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769411 s, 5.3 MB/s 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:20.736 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.995 1+0 records in 00:13:20.995 1+0 records out 00:13:20.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680715 s, 6.0 MB/s 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:20.995 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.253 1+0 records in 00:13:21.253 1+0 records out 00:13:21.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792422 s, 5.2 MB/s 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:21.253 12:44:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.512 1+0 records in 00:13:21.512 1+0 records out 00:13:21.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501646 s, 8.2 MB/s 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:13:21.512 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd0", 00:13:21.770 "bdev_name": "Nvme0n1" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd1", 00:13:21.770 "bdev_name": "Nvme1n1p1" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd2", 00:13:21.770 "bdev_name": "Nvme1n1p2" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd3", 00:13:21.770 "bdev_name": "Nvme2n1" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd4", 00:13:21.770 "bdev_name": "Nvme2n2" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd5", 00:13:21.770 "bdev_name": "Nvme2n3" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd6", 00:13:21.770 "bdev_name": "Nvme3n1" 00:13:21.770 } 00:13:21.770 ]' 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd0", 00:13:21.770 "bdev_name": "Nvme0n1" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd1", 00:13:21.770 "bdev_name": "Nvme1n1p1" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd2", 00:13:21.770 "bdev_name": "Nvme1n1p2" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd3", 00:13:21.770 "bdev_name": "Nvme2n1" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd4", 00:13:21.770 "bdev_name": "Nvme2n2" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd5", 00:13:21.770 "bdev_name": "Nvme2n3" 00:13:21.770 }, 00:13:21.770 { 00:13:21.770 "nbd_device": "/dev/nbd6", 00:13:21.770 "bdev_name": "Nvme3n1" 00:13:21.770 } 00:13:21.770 ]' 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.770 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:22.027 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.028 12:44:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.285 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.285 12:44:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.544 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.802 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.060 12:44:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.318 
12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:23.318 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:13:23.575 /dev/nbd0 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.575 1+0 records in 00:13:23.575 1+0 records out 00:13:23.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043333 s, 9.5 MB/s 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:23.575 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:13:23.832 /dev/nbd1 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.832 12:44:23 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.832 1+0 records in 00:13:23.832 1+0 records out 00:13:23.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394685 s, 10.4 MB/s 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:23.832 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:13:24.089 /dev/nbd10 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.089 1+0 records in 00:13:24.089 1+0 records out 00:13:24.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456016 s, 9.0 MB/s 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:24.089 12:44:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:13:24.346 /dev/nbd11 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.346 1+0 records in 00:13:24.346 1+0 records out 00:13:24.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708075 s, 5.8 MB/s 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:24.346 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:13:24.603 /dev/nbd12 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.603 1+0 records in 00:13:24.603 1+0 records out 00:13:24.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594885 s, 6.9 MB/s 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:24.603 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:13:24.861 /dev/nbd13 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.861 1+0 records in 00:13:24.861 1+0 records out 00:13:24.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349844 s, 11.7 MB/s 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:24.861 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:13:25.118 /dev/nbd14 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.118 1+0 records in 00:13:25.118 1+0 records out 00:13:25.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044672 s, 9.2 MB/s 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.118 12:44:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd0", 00:13:25.397 "bdev_name": "Nvme0n1" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd1", 00:13:25.397 "bdev_name": "Nvme1n1p1" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd10", 00:13:25.397 "bdev_name": "Nvme1n1p2" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd11", 00:13:25.397 "bdev_name": "Nvme2n1" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd12", 00:13:25.397 "bdev_name": "Nvme2n2" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd13", 00:13:25.397 "bdev_name": "Nvme2n3" 
00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd14", 00:13:25.397 "bdev_name": "Nvme3n1" 00:13:25.397 } 00:13:25.397 ]' 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd0", 00:13:25.397 "bdev_name": "Nvme0n1" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd1", 00:13:25.397 "bdev_name": "Nvme1n1p1" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd10", 00:13:25.397 "bdev_name": "Nvme1n1p2" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd11", 00:13:25.397 "bdev_name": "Nvme2n1" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd12", 00:13:25.397 "bdev_name": "Nvme2n2" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd13", 00:13:25.397 "bdev_name": "Nvme2n3" 00:13:25.397 }, 00:13:25.397 { 00:13:25.397 "nbd_device": "/dev/nbd14", 00:13:25.397 "bdev_name": "Nvme3n1" 00:13:25.397 } 00:13:25.397 ]' 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:25.397 /dev/nbd1 00:13:25.397 /dev/nbd10 00:13:25.397 /dev/nbd11 00:13:25.397 /dev/nbd12 00:13:25.397 /dev/nbd13 00:13:25.397 /dev/nbd14' 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:25.397 /dev/nbd1 00:13:25.397 /dev/nbd10 00:13:25.397 /dev/nbd11 00:13:25.397 /dev/nbd12 00:13:25.397 /dev/nbd13 00:13:25.397 /dev/nbd14' 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:25.397 256+0 records in 00:13:25.397 256+0 records out 00:13:25.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717339 s, 146 MB/s 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:25.397 256+0 records in 00:13:25.397 256+0 records out 00:13:25.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.108562 s, 9.7 MB/s 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:25.397 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:25.653 256+0 records in 00:13:25.653 256+0 records out 00:13:25.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0808338 s, 13.0 MB/s 00:13:25.653 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:25.653 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:25.653 256+0 records in 00:13:25.653 256+0 records out 00:13:25.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0760787 s, 13.8 MB/s 00:13:25.653 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:25.653 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:25.653 256+0 records in 00:13:25.653 256+0 records out 00:13:25.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0810841 s, 12.9 MB/s 00:13:25.653 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:25.653 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:25.912 256+0 records in 00:13:25.912 256+0 records out 00:13:25.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18589 s, 5.6 MB/s 00:13:25.912 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:25.912 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:26.171 256+0 records in 00:13:26.171 256+0 records out 00:13:26.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193593 s, 5.4 MB/s 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:26.171 256+0 records in 00:13:26.171 256+0 records out 00:13:26.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123465 s, 8.5 MB/s 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:13:26.171 12:44:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:26.171 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.427 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.428 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.428 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.428 12:44:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:26.428 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.428 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.685 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.943 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.202 12:44:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.460 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.718 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:27.976 12:44:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:28.276 malloc_lvol_verify 00:13:28.276 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:28.535 7e80a527-1edf-4e11-8496-b7b35f4ef15c 00:13:28.535 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:28.793 4d2b3b5b-22cd-48a6-a29a-5313d37044be 00:13:28.793 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:28.793 /dev/nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:29.051 mke2fs 1.47.0 (5-Feb-2023) 00:13:29.051 Discarding device blocks: 0/4096 done 00:13:29.051 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:29.051 00:13:29.051 Allocating group tables: 0/1 done 00:13:29.051 Writing inode tables: 0/1 done 00:13:29.051 Creating journal (1024 blocks): done 00:13:29.051 Writing superblocks and filesystem accounting information: 0/1 done 00:13:29.051 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73548 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73548 ']' 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73548 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:29.051 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.310 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73548 00:13:29.310 killing process with pid 73548 00:13:29.310 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.310 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.310 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73548' 00:13:29.310 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73548 00:13:29.310 12:44:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73548 00:13:29.310 12:44:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:29.310 00:13:29.310 real 0m10.561s 00:13:29.310 user 0m15.154s 00:13:29.310 sys 0m3.691s 00:13:29.310 12:44:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.310 12:44:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:29.310 ************************************ 00:13:29.310 END TEST bdev_nbd 00:13:29.310 ************************************ 00:13:29.571 12:44:29 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:13:29.571 12:44:29 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:13:29.571 skipping fio tests on NVMe due to multi-ns failures. 00:13:29.571 12:44:29 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:13:29.571 12:44:29 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:13:29.571 12:44:29 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:29.571 12:44:29 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:29.571 12:44:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:29.571 12:44:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.571 12:44:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:29.571 ************************************ 00:13:29.571 START TEST bdev_verify 00:13:29.571 ************************************ 00:13:29.571 12:44:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:29.571 [2024-12-05 12:44:29.253682] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:29.571 [2024-12-05 12:44:29.253803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73960 ] 00:13:29.571 [2024-12-05 12:44:29.412470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.832 [2024-12-05 12:44:29.438622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.832 [2024-12-05 12:44:29.438724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.094 Running I/O for 5 seconds... 
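From here the suite switches to bdevperf, which drives the same bdevs directly instead of through NBD. The invocation logged above decodes as follows; the -C gloss is paraphrased from bdevperf's usage text, so treat it as a reading rather than a quote:

# -q 128     keep 128 I/Os in flight per job
# -o 4096    4 KiB I/O size
# -w verify  write a pattern, read it back, and compare
# -t 5       run for 5 seconds
# -C         let every core in the mask target every bdev -- which is why
#            each bdev shows up twice in the table below, once per
#            core mask 0x1 and once per 0x2
# -m 0x3     run reactors on cores 0 and 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3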
00:13:32.501 19200.00 IOPS, 75.00 MiB/s [2024-12-05T12:44:33.289Z] 20096.00 IOPS, 78.50 MiB/s [2024-12-05T12:44:34.223Z] 21440.00 IOPS, 83.75 MiB/s [2024-12-05T12:44:35.212Z] 21728.00 IOPS, 84.88 MiB/s [2024-12-05T12:44:35.212Z] 22259.20 IOPS, 86.95 MiB/s 00:13:35.360 Latency(us) 00:13:35.360 [2024-12-05T12:44:35.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.360 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.360 Verification LBA range: start 0x0 length 0xbd0bd 00:13:35.360 Nvme0n1 : 5.05 1647.70 6.44 0.00 0.00 77389.84 16535.24 94371.84 00:13:35.360 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.360 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:35.360 Nvme0n1 : 5.07 1488.34 5.81 0.00 0.00 85801.21 17442.66 90338.86 00:13:35.360 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.360 Verification LBA range: start 0x0 length 0x4ff80 00:13:35.360 Nvme1n1p1 : 5.05 1647.30 6.43 0.00 0.00 77132.21 16636.06 75416.81 00:13:35.360 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.360 Verification LBA range: start 0x4ff80 length 0x4ff80 00:13:35.361 Nvme1n1p1 : 5.08 1487.87 5.81 0.00 0.00 85708.85 16837.71 82272.89 00:13:35.361 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x0 length 0x4ff7f 00:13:35.361 Nvme1n1p2 : 5.07 1654.16 6.46 0.00 0.00 76662.52 5545.35 73400.32 00:13:35.361 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:13:35.361 Nvme1n1p2 : 5.08 1486.96 5.81 0.00 0.00 85631.85 18249.26 78239.90 00:13:35.361 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x0 length 0x80000 00:13:35.361 Nvme2n1 : 5.08 1662.86 6.50 0.00 0.00 76159.12 9981.64 69770.63 00:13:35.361 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x80000 length 0x80000 00:13:35.361 Nvme2n1 : 5.08 1486.54 5.81 0.00 0.00 85456.05 17946.78 72190.42 00:13:35.361 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x0 length 0x80000 00:13:35.361 Nvme2n2 : 5.08 1661.77 6.49 0.00 0.00 76039.84 10788.23 71787.13 00:13:35.361 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x80000 length 0x80000 00:13:35.361 Nvme2n2 : 5.08 1485.50 5.80 0.00 0.00 85330.59 19963.27 70980.53 00:13:35.361 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x0 length 0x80000 00:13:35.361 Nvme2n3 : 5.09 1660.75 6.49 0.00 0.00 75921.24 9628.75 72593.72 00:13:35.361 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x80000 length 0x80000 00:13:35.361 Nvme2n3 : 5.09 1484.52 5.80 0.00 0.00 85164.61 19862.45 74610.22 00:13:35.361 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x0 length 0x20000 00:13:35.361 Nvme3n1 : 5.09 1671.43 6.53 0.00 0.00 75391.66 1594.29 74610.22 00:13:35.361 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:35.361 Verification LBA range: start 0x20000 length 0x20000 00:13:35.361 Nvme3n1 : 
5.09 1483.78 5.80 0.00 0.00 84993.76 17140.18 78239.90 00:13:35.361 [2024-12-05T12:44:35.213Z] =================================================================================================================== 00:13:35.361 [2024-12-05T12:44:35.213Z] Total : 22009.47 85.97 0.00 0.00 80666.59 1594.29 94371.84 00:13:36.293 00:13:36.293 real 0m6.595s 00:13:36.293 user 0m12.434s 00:13:36.293 sys 0m0.249s 00:13:36.293 12:44:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.293 12:44:35 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:36.293 ************************************ 00:13:36.293 END TEST bdev_verify 00:13:36.293 ************************************ 00:13:36.293 12:44:35 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:36.293 12:44:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:36.293 12:44:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.293 12:44:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:36.293 ************************************ 00:13:36.293 START TEST bdev_verify_big_io 00:13:36.293 ************************************ 00:13:36.293 12:44:35 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:36.293 [2024-12-05 12:44:35.879260] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:36.293 [2024-12-05 12:44:35.879376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74052 ] 00:13:36.293 [2024-12-05 12:44:36.030544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:36.293 [2024-12-05 12:44:36.056246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.293 [2024-12-05 12:44:36.056276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.856 Running I/O for 5 seconds... 
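A quick sanity check on the bdev_verify table above: bdevperf's MiB/s column is just IOPS times the 4096-byte I/O size, and the per-second progress lines obey the same relation. For the Total row:

# 22009.47 IOPS * 4096 B per I/O, converted to MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 22009.47 * 4096 / (1024 * 1024) }'
# prints 85.97 MiB/s -- the MiB/s figure in the Total row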
00:13:42.058 1623.00 IOPS, 101.44 MiB/s [2024-12-05T12:44:42.487Z] 2716.50 IOPS, 169.78 MiB/s [2024-12-05T12:44:42.744Z] 3120.67 IOPS, 195.04 MiB/s 00:13:42.892 Latency(us) 00:13:42.892 [2024-12-05T12:44:42.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.892 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.892 Verification LBA range: start 0x0 length 0xbd0b 00:13:42.892 Nvme0n1 : 5.61 118.42 7.40 0.00 0.00 1040735.90 21173.17 1025991.29 00:13:42.892 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.892 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:42.892 Nvme0n1 : 5.89 127.65 7.98 0.00 0.00 881886.50 46177.67 1574477.19 00:13:42.892 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.892 Verification LBA range: start 0x0 length 0x4ff8 00:13:42.892 Nvme1n1p1 : 5.61 119.07 7.44 0.00 0.00 1011738.19 112116.97 1000180.18 00:13:42.893 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x4ff8 length 0x4ff8 00:13:42.893 Nvme1n1p1 : 5.91 137.88 8.62 0.00 0.00 791003.62 17543.48 1374441.16 00:13:42.893 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x0 length 0x4ff7 00:13:42.893 Nvme1n1p2 : 5.69 122.83 7.68 0.00 0.00 961128.32 76626.71 1013085.74 00:13:42.893 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x4ff7 length 0x4ff7 00:13:42.893 Nvme1n1p2 : 6.01 163.28 10.20 0.00 0.00 649932.17 10284.11 1645457.72 00:13:42.893 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x0 length 0x8000 00:13:42.893 Nvme2n1 : 5.79 127.70 7.98 0.00 0.00 904427.63 64527.75 1167952.34 00:13:42.893 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x8000 length 0x8000 00:13:42.893 Nvme2n1 : 6.08 219.00 13.69 0.00 0.00 472328.98 113.43 1677721.60 00:13:42.893 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x0 length 0x8000 00:13:42.893 Nvme2n2 : 5.79 132.67 8.29 0.00 0.00 855399.71 29440.79 1077613.49 00:13:42.893 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x8000 length 0x8000 00:13:42.893 Nvme2n2 : 5.65 124.54 7.78 0.00 0.00 992698.61 16232.76 1271196.75 00:13:42.893 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x0 length 0x8000 00:13:42.893 Nvme2n3 : 5.85 137.10 8.57 0.00 0.00 803532.33 35086.97 1180857.90 00:13:42.893 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x8000 length 0x8000 00:13:42.893 Nvme2n3 : 5.65 124.49 7.78 0.00 0.00 958848.68 102437.81 1084066.26 00:13:42.893 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x0 length 0x2000 00:13:42.893 Nvme3n1 : 5.89 156.92 9.81 0.00 0.00 687016.99 2432.39 1116330.14 00:13:42.893 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.893 Verification LBA range: start 0x2000 length 0x2000 00:13:42.893 Nvme3n1 : 5.77 125.84 7.86 0.00 0.00 918089.50 102437.81 987274.63 00:13:42.893 
[2024-12-05T12:44:42.745Z] =================================================================================================================== 00:13:42.893 [2024-12-05T12:44:42.745Z] Total : 1937.39 121.09 0.00 0.00 820944.96 113.43 1677721.60 00:13:43.830 00:13:43.830 real 0m7.764s 00:13:43.830 user 0m14.813s 00:13:43.830 sys 0m0.232s 00:13:43.830 12:44:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.830 12:44:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.830 ************************************ 00:13:43.830 END TEST bdev_verify_big_io 00:13:43.830 ************************************ 00:13:43.830 12:44:43 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:43.830 12:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:43.830 12:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.830 12:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:43.830 ************************************ 00:13:43.830 START TEST bdev_write_zeroes 00:13:43.830 ************************************ 00:13:43.830 12:44:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:44.088 [2024-12-05 12:44:43.689966] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:44.088 [2024-12-05 12:44:43.690106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74150 ] 00:13:44.088 [2024-12-05 12:44:43.846197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.088 [2024-12-05 12:44:43.869354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.656 Running I/O for 1 seconds... 
00:13:45.589 58920.00 IOPS, 230.16 MiB/s 00:13:45.589 Latency(us) 00:13:45.589 [2024-12-05T12:44:45.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.589 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.589 Nvme0n1 : 1.02 8238.79 32.18 0.00 0.00 15504.03 10536.17 124215.93 00:13:45.589 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.589 Nvme1n1p1 : 1.02 8439.63 32.97 0.00 0.00 15113.53 10485.76 90742.15 00:13:45.589 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.589 Nvme1n1p2 : 1.03 8429.25 32.93 0.00 0.00 15099.17 10788.23 91548.75 00:13:45.589 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.589 Nvme2n1 : 1.03 8419.72 32.89 0.00 0.00 15070.71 10788.23 91145.45 00:13:45.589 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.589 Nvme2n2 : 1.03 8410.10 32.85 0.00 0.00 15038.57 8620.50 91145.45 00:13:45.589 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.589 Nvme2n3 : 1.03 8400.64 32.82 0.00 0.00 15018.32 6856.07 85902.57 00:13:45.589 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.589 Nvme3n1 : 1.03 8391.18 32.78 0.00 0.00 14998.23 5520.15 97598.23 00:13:45.589 [2024-12-05T12:44:45.442Z] =================================================================================================================== 00:13:45.590 [2024-12-05T12:44:45.442Z] Total : 58729.32 229.41 0.00 0.00 15118.99 5520.15 124215.93 00:13:45.846 00:13:45.846 real 0m1.868s 00:13:45.846 user 0m1.571s 00:13:45.846 sys 0m0.188s 00:13:45.846 12:44:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.846 12:44:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:45.846 ************************************ 00:13:45.847 END TEST bdev_write_zeroes 00:13:45.847 ************************************ 00:13:45.847 12:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.847 12:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:45.847 12:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.847 12:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:45.847 ************************************ 00:13:45.847 START TEST bdev_json_nonenclosed 00:13:45.847 ************************************ 00:13:45.847 12:44:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.847 [2024-12-05 12:44:45.599291] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:13:45.847 [2024-12-05 12:44:45.599430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74191 ] 00:13:46.104 [2024-12-05 12:44:45.764960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.104 [2024-12-05 12:44:45.789865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.104 [2024-12-05 12:44:45.789974] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:46.104 [2024-12-05 12:44:45.789994] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:46.104 [2024-12-05 12:44:45.790009] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.104 00:13:46.104 real 0m0.329s 00:13:46.104 user 0m0.123s 00:13:46.104 sys 0m0.099s 00:13:46.104 12:44:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.104 ************************************ 00:13:46.104 END TEST bdev_json_nonenclosed 00:13:46.104 ************************************ 00:13:46.104 12:44:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:46.104 12:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.104 12:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:46.104 12:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.104 12:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:46.104 ************************************ 00:13:46.104 START TEST bdev_json_nonarray 00:13:46.104 ************************************ 00:13:46.104 12:44:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:46.363 [2024-12-05 12:44:45.970440] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:46.363 [2024-12-05 12:44:45.970553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74212 ] 00:13:46.363 [2024-12-05 12:44:46.125934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.363 [2024-12-05 12:44:46.149939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.363 [2024-12-05 12:44:46.150050] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
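Both JSON tests here are negative tests: bdevperf is handed a deliberately malformed --json config and must refuse to start, exiting through spdk_app_stop with a non-zero code rather than crashing. The log never shows the contents of nonenclosed.json or nonarray.json, so the shapes below are only inferred from the two error strings — illustrative stand-ins, not copies of the repository files:

# nonenclosed.json (inferred): top-level content not wrapped in an object
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# -> json_config_prepare_ctx: Invalid JSON configuration: not enclosed in {}.

# nonarray.json (inferred): "subsystems" present but not an array
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# -> json_config_prepare_ctx: Invalid JSON configuration: 'subsystems' should be an array.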
00:13:46.363 [2024-12-05 12:44:46.150067] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:46.363 [2024-12-05 12:44:46.150079] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.624 00:13:46.624 real 0m0.308s 00:13:46.624 user 0m0.119s 00:13:46.624 sys 0m0.086s 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:46.624 ************************************ 00:13:46.624 END TEST bdev_json_nonarray 00:13:46.624 ************************************ 00:13:46.624 12:44:46 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:13:46.624 12:44:46 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:13:46.624 12:44:46 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:13:46.624 12:44:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:46.624 12:44:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.624 12:44:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:46.624 ************************************ 00:13:46.624 START TEST bdev_gpt_uuid 00:13:46.624 ************************************ 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74232 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74232 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74232 ']' 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:46.624 12:44:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:46.624 [2024-12-05 12:44:46.336754] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
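The final test starts a bare spdk_tgt, loads the bdev config, and then resolves GPT partition bdevs by their unique partition GUID. The trace that follows asserts on three jq projections of the bdev_get_bdevs output; a rough standalone equivalent, assuming rpc.py's default socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first's partition GUID

bdev=$($rpc bdev_get_bdevs -b $uuid)
echo "$bdev" | jq -r length                                            # exactly one match
echo "$bdev" | jq -r '.[0].aliases[0]'                                 # alias is the GUID
echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'  # GUID round-trips

The same three checks are then repeated for the SPDK_TEST_second partition, abf1734f-66e5-4c0f-aa29-4021d4d307df.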
00:13:46.624 [2024-12-05 12:44:46.337300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74232 ] 00:13:46.882 [2024-12-05 12:44:46.494835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.882 [2024-12-05 12:44:46.519075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.447 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.447 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:13:47.447 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:47.447 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.447 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 Some configs were skipped because the RPC state that can call them passed over. 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:13:47.705 { 00:13:47.705 "name": "Nvme1n1p1", 00:13:47.705 "aliases": [ 00:13:47.705 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:13:47.705 ], 00:13:47.705 "product_name": "GPT Disk", 00:13:47.705 "block_size": 4096, 00:13:47.705 "num_blocks": 655104, 00:13:47.705 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:47.705 "assigned_rate_limits": { 00:13:47.705 "rw_ios_per_sec": 0, 00:13:47.705 "rw_mbytes_per_sec": 0, 00:13:47.705 "r_mbytes_per_sec": 0, 00:13:47.705 "w_mbytes_per_sec": 0 00:13:47.705 }, 00:13:47.705 "claimed": false, 00:13:47.705 "zoned": false, 00:13:47.705 "supported_io_types": { 00:13:47.705 "read": true, 00:13:47.705 "write": true, 00:13:47.705 "unmap": true, 00:13:47.705 "flush": true, 00:13:47.705 "reset": true, 00:13:47.705 "nvme_admin": false, 00:13:47.705 "nvme_io": false, 00:13:47.705 "nvme_io_md": false, 00:13:47.705 "write_zeroes": true, 00:13:47.705 "zcopy": false, 00:13:47.705 "get_zone_info": false, 00:13:47.705 "zone_management": false, 00:13:47.705 "zone_append": false, 00:13:47.705 "compare": true, 00:13:47.705 "compare_and_write": false, 00:13:47.705 "abort": true, 00:13:47.705 "seek_hole": false, 00:13:47.705 "seek_data": false, 00:13:47.705 "copy": true, 00:13:47.705 "nvme_iov_md": false 00:13:47.705 }, 00:13:47.705 "driver_specific": { 
00:13:47.705 "gpt": { 00:13:47.705 "base_bdev": "Nvme1n1", 00:13:47.705 "offset_blocks": 256, 00:13:47.705 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:13:47.705 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:13:47.705 "partition_name": "SPDK_TEST_first" 00:13:47.705 } 00:13:47.705 } 00:13:47.705 } 00:13:47.705 ]' 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:13:47.705 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.963 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:13:47.963 { 00:13:47.963 "name": "Nvme1n1p2", 00:13:47.963 "aliases": [ 00:13:47.963 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:13:47.963 ], 00:13:47.963 "product_name": "GPT Disk", 00:13:47.963 "block_size": 4096, 00:13:47.963 "num_blocks": 655103, 00:13:47.963 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:47.963 "assigned_rate_limits": { 00:13:47.963 "rw_ios_per_sec": 0, 00:13:47.963 "rw_mbytes_per_sec": 0, 00:13:47.963 "r_mbytes_per_sec": 0, 00:13:47.963 "w_mbytes_per_sec": 0 00:13:47.963 }, 00:13:47.963 "claimed": false, 00:13:47.963 "zoned": false, 00:13:47.963 "supported_io_types": { 00:13:47.963 "read": true, 00:13:47.963 "write": true, 00:13:47.963 "unmap": true, 00:13:47.963 "flush": true, 00:13:47.963 "reset": true, 00:13:47.963 "nvme_admin": false, 00:13:47.963 "nvme_io": false, 00:13:47.963 "nvme_io_md": false, 00:13:47.963 "write_zeroes": true, 00:13:47.963 "zcopy": false, 00:13:47.963 "get_zone_info": false, 00:13:47.963 "zone_management": false, 00:13:47.963 "zone_append": false, 00:13:47.963 "compare": true, 00:13:47.963 "compare_and_write": false, 00:13:47.963 "abort": true, 00:13:47.964 "seek_hole": false, 00:13:47.964 "seek_data": false, 00:13:47.964 "copy": true, 00:13:47.964 "nvme_iov_md": false 00:13:47.964 }, 00:13:47.964 "driver_specific": { 00:13:47.964 "gpt": { 00:13:47.964 "base_bdev": "Nvme1n1", 00:13:47.964 "offset_blocks": 655360, 00:13:47.964 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:13:47.964 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:13:47.964 "partition_name": "SPDK_TEST_second" 00:13:47.964 } 00:13:47.964 } 00:13:47.964 } 00:13:47.964 ]' 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 74232 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74232 ']' 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74232 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74232 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:47.964 killing process with pid 74232 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74232' 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74232 00:13:47.964 12:44:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74232 00:13:48.221 00:13:48.221 real 0m1.806s 00:13:48.221 user 0m1.931s 00:13:48.221 sys 0m0.378s 00:13:48.221 ************************************ 00:13:48.221 12:44:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.221 12:44:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:13:48.221 END TEST bdev_gpt_uuid 00:13:48.221 ************************************ 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:13:48.478 12:44:48 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:48.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:48.735 Waiting for block devices as requested 00:13:48.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:48.993 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:13:48.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:48.993 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:54.254 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:54.254 12:44:53 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:13:54.254 12:44:53 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:13:54.512 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:13:54.512 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:13:54.512 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:13:54.512 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:13:54.512 12:44:54 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:13:54.512 00:13:54.512 real 0m50.044s 00:13:54.512 user 1m3.374s 00:13:54.512 sys 0m8.577s 00:13:54.512 12:44:54 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.512 12:44:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:13:54.512 ************************************ 00:13:54.512 END TEST blockdev_nvme_gpt 00:13:54.512 ************************************ 00:13:54.512 12:44:54 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:54.512 12:44:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.512 12:44:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.512 12:44:54 -- common/autotest_common.sh@10 -- # set +x 00:13:54.512 ************************************ 00:13:54.512 START TEST nvme 00:13:54.512 ************************************ 00:13:54.512 12:44:54 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:13:54.512 * Looking for test storage... 00:13:54.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:54.512 12:44:54 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:54.512 12:44:54 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:13:54.512 12:44:54 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:54.512 12:44:54 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:54.512 12:44:54 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.512 12:44:54 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.512 12:44:54 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.512 12:44:54 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.512 12:44:54 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.512 12:44:54 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.512 12:44:54 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.512 12:44:54 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.512 12:44:54 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.512 12:44:54 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.512 12:44:54 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.512 12:44:54 nvme -- scripts/common.sh@344 -- # case "$op" in 00:13:54.512 12:44:54 nvme -- scripts/common.sh@345 -- # : 1 00:13:54.512 12:44:54 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.513 12:44:54 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.513 12:44:54 nvme -- scripts/common.sh@365 -- # decimal 1 00:13:54.513 12:44:54 nvme -- scripts/common.sh@353 -- # local d=1 00:13:54.513 12:44:54 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.513 12:44:54 nvme -- scripts/common.sh@355 -- # echo 1 00:13:54.513 12:44:54 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.513 12:44:54 nvme -- scripts/common.sh@366 -- # decimal 2 00:13:54.513 12:44:54 nvme -- scripts/common.sh@353 -- # local d=2 00:13:54.513 12:44:54 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.513 12:44:54 nvme -- scripts/common.sh@355 -- # echo 2 00:13:54.513 12:44:54 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.513 12:44:54 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.513 12:44:54 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.513 12:44:54 nvme -- scripts/common.sh@368 -- # return 0 00:13:54.513 12:44:54 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.513 12:44:54 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:54.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.513 --rc genhtml_branch_coverage=1 00:13:54.513 --rc genhtml_function_coverage=1 00:13:54.513 --rc genhtml_legend=1 00:13:54.513 --rc geninfo_all_blocks=1 00:13:54.513 --rc geninfo_unexecuted_blocks=1 00:13:54.513 00:13:54.513 ' 00:13:54.513 12:44:54 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:54.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.513 --rc genhtml_branch_coverage=1 00:13:54.513 --rc genhtml_function_coverage=1 00:13:54.513 --rc genhtml_legend=1 00:13:54.513 --rc geninfo_all_blocks=1 00:13:54.513 --rc geninfo_unexecuted_blocks=1 00:13:54.513 00:13:54.513 ' 00:13:54.513 12:44:54 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:54.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.513 --rc genhtml_branch_coverage=1 00:13:54.513 --rc genhtml_function_coverage=1 00:13:54.513 --rc genhtml_legend=1 00:13:54.513 --rc geninfo_all_blocks=1 00:13:54.513 --rc geninfo_unexecuted_blocks=1 00:13:54.513 00:13:54.513 ' 00:13:54.513 12:44:54 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:54.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.513 --rc genhtml_branch_coverage=1 00:13:54.513 --rc genhtml_function_coverage=1 00:13:54.513 --rc genhtml_legend=1 00:13:54.513 --rc geninfo_all_blocks=1 00:13:54.513 --rc geninfo_unexecuted_blocks=1 00:13:54.513 00:13:54.513 ' 00:13:54.513 12:44:54 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:55.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:55.333 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.333 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.333 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.333 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.613 12:44:55 nvme -- nvme/nvme.sh@79 -- # uname 00:13:55.613 12:44:55 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:13:55.613 12:44:55 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:13:55.613 12:44:55 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:13:55.613 12:44:55 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1075 -- # stubpid=74858 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:13:55.613 Waiting for stub to ready for secondary processes... 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74858 ]] 00:13:55.613 12:44:55 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:13:55.613 [2024-12-05 12:44:55.294912] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:13:55.613 [2024-12-05 12:44:55.295034] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:13:56.232 [2024-12-05 12:44:56.069018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.489 [2024-12-05 12:44:56.084770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.489 [2024-12-05 12:44:56.084993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.489 [2024-12-05 12:44:56.085380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.489 [2024-12-05 12:44:56.101232] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:13:56.489 [2024-12-05 12:44:56.101273] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:56.489 [2024-12-05 12:44:56.111665] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:56.489 [2024-12-05 12:44:56.111931] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:56.489 [2024-12-05 12:44:56.112709] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:56.489 [2024-12-05 12:44:56.112934] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:13:56.489 [2024-12-05 12:44:56.113010] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:13:56.489 [2024-12-05 12:44:56.113854] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:56.489 [2024-12-05 12:44:56.114078] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:13:56.489 [2024-12-05 12:44:56.114143] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:13:56.489 [2024-12-05 12:44:56.115078] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:56.489 [2024-12-05 12:44:56.115242] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:13:56.489 [2024-12-05 12:44:56.115319] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:13:56.489 [2024-12-05 12:44:56.115405] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:13:56.489 [2024-12-05 12:44:56.115509] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:13:56.489 12:44:56 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:13:56.489 done. 00:13:56.489 12:44:56 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:13:56.489 12:44:56 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:56.489 12:44:56 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:13:56.489 12:44:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.489 12:44:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.489 ************************************ 00:13:56.489 START TEST nvme_reset 00:13:56.489 ************************************ 00:13:56.489 12:44:56 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:13:56.746 Initializing NVMe Controllers 00:13:56.746 Skipping QEMU NVMe SSD at 0000:00:10.0 00:13:56.746 Skipping QEMU NVMe SSD at 0000:00:11.0 00:13:56.746 Skipping QEMU NVMe SSD at 0000:00:13.0 00:13:56.746 Skipping QEMU NVMe SSD at 0000:00:12.0 00:13:56.746 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:13:56.746 00:13:56.746 real 0m0.203s 00:13:56.746 user 0m0.070s 00:13:56.746 sys 0m0.087s 00:13:56.746 12:44:56 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.746 12:44:56 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:13:56.746 ************************************ 00:13:56.746 END TEST nvme_reset 00:13:56.746 ************************************ 00:13:56.746 12:44:56 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:13:56.746 12:44:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.747 12:44:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.747 12:44:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.747 ************************************ 00:13:56.747 START TEST nvme_identify 00:13:56.747 ************************************ 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:13:56.747 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:13:56.747 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:13:56.747 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:13:56.747 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:56.747 12:44:56 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:56.747 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:13:57.007 [2024-12-05 
12:44:56.725763] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74879 terminated unexpected 00:13:57.007 ===================================================== 00:13:57.007 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:57.007 ===================================================== 00:13:57.007 Controller Capabilities/Features 00:13:57.007 ================================ 00:13:57.007 Vendor ID: 1b36 00:13:57.007 Subsystem Vendor ID: 1af4 00:13:57.007 Serial Number: 12340 00:13:57.007 Model Number: QEMU NVMe Ctrl 00:13:57.007 Firmware Version: 8.0.0 00:13:57.007 Recommended Arb Burst: 6 00:13:57.007 IEEE OUI Identifier: 00 54 52 00:13:57.007 Multi-path I/O 00:13:57.007 May have multiple subsystem ports: No 00:13:57.007 May have multiple controllers: No 00:13:57.007 Associated with SR-IOV VF: No 00:13:57.007 Max Data Transfer Size: 524288 00:13:57.007 Max Number of Namespaces: 256 00:13:57.007 Max Number of I/O Queues: 64 00:13:57.007 NVMe Specification Version (VS): 1.4 00:13:57.007 NVMe Specification Version (Identify): 1.4 00:13:57.007 Maximum Queue Entries: 2048 00:13:57.007 Contiguous Queues Required: Yes 00:13:57.007 Arbitration Mechanisms Supported 00:13:57.007 Weighted Round Robin: Not Supported 00:13:57.007 Vendor Specific: Not Supported 00:13:57.007 Reset Timeout: 7500 ms 00:13:57.007 Doorbell Stride: 4 bytes 00:13:57.007 NVM Subsystem Reset: Not Supported 00:13:57.007 Command Sets Supported 00:13:57.007 NVM Command Set: Supported 00:13:57.007 Boot Partition: Not Supported 00:13:57.007 Memory Page Size Minimum: 4096 bytes 00:13:57.007 Memory Page Size Maximum: 65536 bytes 00:13:57.007 Persistent Memory Region: Not Supported 00:13:57.007 Optional Asynchronous Events Supported 00:13:57.007 Namespace Attribute Notices: Supported 00:13:57.007 Firmware Activation Notices: Not Supported 00:13:57.007 ANA Change Notices: Not Supported 00:13:57.007 PLE Aggregate Log Change Notices: Not Supported 00:13:57.007 LBA Status Info Alert Notices: Not Supported 00:13:57.007 EGE Aggregate Log Change Notices: Not Supported 00:13:57.007 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.007 Zone Descriptor Change Notices: Not Supported 00:13:57.007 Discovery Log Change Notices: Not Supported 00:13:57.007 Controller Attributes 00:13:57.007 128-bit Host Identifier: Not Supported 00:13:57.007 Non-Operational Permissive Mode: Not Supported 00:13:57.007 NVM Sets: Not Supported 00:13:57.007 Read Recovery Levels: Not Supported 00:13:57.007 Endurance Groups: Not Supported 00:13:57.007 Predictable Latency Mode: Not Supported 00:13:57.007 Traffic Based Keep ALive: Not Supported 00:13:57.007 Namespace Granularity: Not Supported 00:13:57.007 SQ Associations: Not Supported 00:13:57.007 UUID List: Not Supported 00:13:57.007 Multi-Domain Subsystem: Not Supported 00:13:57.008 Fixed Capacity Management: Not Supported 00:13:57.008 Variable Capacity Management: Not Supported 00:13:57.008 Delete Endurance Group: Not Supported 00:13:57.008 Delete NVM Set: Not Supported 00:13:57.008 Extended LBA Formats Supported: Supported 00:13:57.008 Flexible Data Placement Supported: Not Supported 00:13:57.008 00:13:57.008 Controller Memory Buffer Support 00:13:57.008 ================================ 00:13:57.008 Supported: No 00:13:57.008 00:13:57.008 Persistent Memory Region Support 00:13:57.008 ================================ 00:13:57.008 Supported: No 00:13:57.008 00:13:57.008 Admin Command Set Attributes 00:13:57.008 ============================ 00:13:57.008 Security Send/Receive: 
Not Supported 00:13:57.008 Format NVM: Supported 00:13:57.008 Firmware Activate/Download: Not Supported 00:13:57.008 Namespace Management: Supported 00:13:57.008 Device Self-Test: Not Supported 00:13:57.008 Directives: Supported 00:13:57.008 NVMe-MI: Not Supported 00:13:57.008 Virtualization Management: Not Supported 00:13:57.008 Doorbell Buffer Config: Supported 00:13:57.008 Get LBA Status Capability: Not Supported 00:13:57.008 Command & Feature Lockdown Capability: Not Supported 00:13:57.008 Abort Command Limit: 4 00:13:57.008 Async Event Request Limit: 4 00:13:57.008 Number of Firmware Slots: N/A 00:13:57.008 Firmware Slot 1 Read-Only: N/A 00:13:57.008 Firmware Activation Without Reset: N/A 00:13:57.008 Multiple Update Detection Support: N/A 00:13:57.008 Firmware Update Granularity: No Information Provided 00:13:57.008 Per-Namespace SMART Log: Yes 00:13:57.008 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.008 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:57.008 Command Effects Log Page: Supported 00:13:57.008 Get Log Page Extended Data: Supported 00:13:57.008 Telemetry Log Pages: Not Supported 00:13:57.008 Persistent Event Log Pages: Not Supported 00:13:57.008 Supported Log Pages Log Page: May Support 00:13:57.008 Commands Supported & Effects Log Page: Not Supported 00:13:57.008 Feature Identifiers & Effects Log Page:May Support 00:13:57.008 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.008 Data Area 4 for Telemetry Log: Not Supported 00:13:57.008 Error Log Page Entries Supported: 1 00:13:57.008 Keep Alive: Not Supported 00:13:57.008 00:13:57.008 NVM Command Set Attributes 00:13:57.008 ========================== 00:13:57.008 Submission Queue Entry Size 00:13:57.008 Max: 64 00:13:57.008 Min: 64 00:13:57.008 Completion Queue Entry Size 00:13:57.008 Max: 16 00:13:57.008 Min: 16 00:13:57.008 Number of Namespaces: 256 00:13:57.008 Compare Command: Supported 00:13:57.008 Write Uncorrectable Command: Not Supported 00:13:57.008 Dataset Management Command: Supported 00:13:57.008 Write Zeroes Command: Supported 00:13:57.008 Set Features Save Field: Supported 00:13:57.008 Reservations: Not Supported 00:13:57.008 Timestamp: Supported 00:13:57.008 Copy: Supported 00:13:57.008 Volatile Write Cache: Present 00:13:57.008 Atomic Write Unit (Normal): 1 00:13:57.008 Atomic Write Unit (PFail): 1 00:13:57.008 Atomic Compare & Write Unit: 1 00:13:57.008 Fused Compare & Write: Not Supported 00:13:57.008 Scatter-Gather List 00:13:57.008 SGL Command Set: Supported 00:13:57.008 SGL Keyed: Not Supported 00:13:57.008 SGL Bit Bucket Descriptor: Not Supported 00:13:57.008 SGL Metadata Pointer: Not Supported 00:13:57.008 Oversized SGL: Not Supported 00:13:57.008 SGL Metadata Address: Not Supported 00:13:57.008 SGL Offset: Not Supported 00:13:57.008 Transport SGL Data Block: Not Supported 00:13:57.008 Replay Protected Memory Block: Not Supported 00:13:57.008 00:13:57.008 Firmware Slot Information 00:13:57.008 ========================= 00:13:57.008 Active slot: 1 00:13:57.008 Slot 1 Firmware Revision: 1.0 00:13:57.008 00:13:57.008 00:13:57.008 Commands Supported and Effects 00:13:57.008 ============================== 00:13:57.008 Admin Commands 00:13:57.008 -------------- 00:13:57.008 Delete I/O Submission Queue (00h): Supported 00:13:57.008 Create I/O Submission Queue (01h): Supported 00:13:57.008 Get Log Page (02h): Supported 00:13:57.008 Delete I/O Completion Queue (04h): Supported 00:13:57.008 Create I/O Completion Queue (05h): Supported 00:13:57.008 Identify (06h): Supported 
00:13:57.008 Abort (08h): Supported 00:13:57.008 Set Features (09h): Supported 00:13:57.008 Get Features (0Ah): Supported 00:13:57.008 Asynchronous Event Request (0Ch): Supported 00:13:57.008 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.008 Directive Send (19h): Supported 00:13:57.008 Directive Receive (1Ah): Supported 00:13:57.008 Virtualization Management (1Ch): Supported 00:13:57.008 Doorbell Buffer Config (7Ch): Supported 00:13:57.008 Format NVM (80h): Supported LBA-Change 00:13:57.008 I/O Commands 00:13:57.008 ------------ 00:13:57.008 Flush (00h): Supported LBA-Change 00:13:57.008 Write (01h): Supported LBA-Change 00:13:57.008 Read (02h): Supported 00:13:57.008 Compare (05h): Supported 00:13:57.008 Write Zeroes (08h): Supported LBA-Change 00:13:57.008 Dataset Management (09h): Supported LBA-Change 00:13:57.008 Unknown (0Ch): Supported 00:13:57.008 Unknown (12h): Supported 00:13:57.008 Copy (19h): Supported LBA-Change 00:13:57.008 Unknown (1Dh): Supported LBA-Change 00:13:57.008 00:13:57.008 Error Log 00:13:57.008 ========= 00:13:57.008 00:13:57.008 Arbitration 00:13:57.008 =========== 00:13:57.008 Arbitration Burst: no limit 00:13:57.008 00:13:57.008 Power Management 00:13:57.008 ================ 00:13:57.008 Number of Power States: 1 00:13:57.008 Current Power State: Power State #0 00:13:57.008 Power State #0: 00:13:57.008 Max Power: 25.00 W 00:13:57.008 Non-Operational State: Operational 00:13:57.008 Entry Latency: 16 microseconds 00:13:57.008 Exit Latency: 4 microseconds 00:13:57.008 Relative Read Throughput: 0 00:13:57.008 Relative Read Latency: 0 00:13:57.008 Relative Write Throughput: 0 00:13:57.008 Relative Write Latency: 0 00:13:57.008 [2024-12-05 12:44:56.726755] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74879 terminated unexpected 00:13:57.009 Idle Power: Not Reported 00:13:57.009 Active Power: Not Reported 00:13:57.009 Non-Operational Permissive Mode: Not Supported 00:13:57.009 00:13:57.009 Health Information 00:13:57.009 ================== 00:13:57.009 Critical Warnings: 00:13:57.009 Available Spare Space: OK 00:13:57.009 Temperature: OK 00:13:57.009 Device Reliability: OK 00:13:57.009 Read Only: No 00:13:57.009 Volatile Memory Backup: OK 00:13:57.009 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.009 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.009 Available Spare: 0% 00:13:57.009 Available Spare Threshold: 0% 00:13:57.009 Life Percentage Used: 0% 00:13:57.009 Data Units Read: 706 00:13:57.009 Data Units Written: 634 00:13:57.009 Host Read Commands: 36377 00:13:57.009 Host Write Commands: 36163 00:13:57.009 Controller Busy Time: 0 minutes 00:13:57.009 Power Cycles: 0 00:13:57.009 Power On Hours: 0 hours 00:13:57.009 Unsafe Shutdowns: 0 00:13:57.009 Unrecoverable Media Errors: 0 00:13:57.009 Lifetime Error Log Entries: 0 00:13:57.009 Warning Temperature Time: 0 minutes 00:13:57.009 Critical Temperature Time: 0 minutes 00:13:57.009 00:13:57.009 Number of Queues 00:13:57.009 ================ 00:13:57.009 Number of I/O Submission Queues: 64 00:13:57.009 Number of I/O Completion Queues: 64 00:13:57.009 00:13:57.009 ZNS Specific Controller Data 00:13:57.009 ============================ 00:13:57.009 Zone Append Size Limit: 0 00:13:57.009 00:13:57.009 00:13:57.009 Active Namespaces 00:13:57.009 ================= 00:13:57.009 Namespace ID:1 00:13:57.009 Error Recovery Timeout: Unlimited 00:13:57.009 Command Set Identifier: NVM (00h) 00:13:57.009 Deallocate: Supported
Deallocated/Unwritten Error: Supported 00:13:57.009 Deallocated Read Value: All 0x00 00:13:57.009 Deallocate in Write Zeroes: Not Supported 00:13:57.009 Deallocated Guard Field: 0xFFFF 00:13:57.009 Flush: Supported 00:13:57.009 Reservation: Not Supported 00:13:57.009 Metadata Transferred as: Separate Metadata Buffer 00:13:57.009 Namespace Sharing Capabilities: Private 00:13:57.009 Size (in LBAs): 1548666 (5GiB) 00:13:57.009 Capacity (in LBAs): 1548666 (5GiB) 00:13:57.009 Utilization (in LBAs): 1548666 (5GiB) 00:13:57.009 Thin Provisioning: Not Supported 00:13:57.009 Per-NS Atomic Units: No 00:13:57.009 Maximum Single Source Range Length: 128 00:13:57.009 Maximum Copy Length: 128 00:13:57.009 Maximum Source Range Count: 128 00:13:57.009 NGUID/EUI64 Never Reused: No 00:13:57.009 Namespace Write Protected: No 00:13:57.009 Number of LBA Formats: 8 00:13:57.009 Current LBA Format: LBA Format #07 00:13:57.009 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.009 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.009 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.009 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.009 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.009 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.009 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.009 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.009 00:13:57.009 NVM Specific Namespace Data 00:13:57.009 =========================== 00:13:57.009 Logical Block Storage Tag Mask: 0 00:13:57.009 Protection Information Capabilities: 00:13:57.009 16b Guard Protection Information Storage Tag Support: No 00:13:57.009 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.009 Storage Tag Check Read Support: No 00:13:57.009 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.009 ===================================================== 00:13:57.009 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:57.009 ===================================================== 00:13:57.009 Controller Capabilities/Features 00:13:57.009 ================================ 00:13:57.009 Vendor ID: 1b36 00:13:57.009 Subsystem Vendor ID: 1af4 00:13:57.009 Serial Number: 12341 00:13:57.009 Model Number: QEMU NVMe Ctrl 00:13:57.009 Firmware Version: 8.0.0 00:13:57.009 Recommended Arb Burst: 6 00:13:57.009 IEEE OUI Identifier: 00 54 52 00:13:57.009 Multi-path I/O 00:13:57.009 May have multiple subsystem ports: No 00:13:57.009 May have multiple controllers: No 00:13:57.009 Associated with SR-IOV VF: No 00:13:57.009 Max Data Transfer Size: 524288 00:13:57.009 Max Number of Namespaces: 256 00:13:57.009 Max Number of I/O Queues: 64 00:13:57.009 NVMe Specification Version (VS): 1.4 00:13:57.009 NVMe 
Specification Version (Identify): 1.4 00:13:57.009 Maximum Queue Entries: 2048 00:13:57.010 Contiguous Queues Required: Yes 00:13:57.010 Arbitration Mechanisms Supported 00:13:57.010 Weighted Round Robin: Not Supported 00:13:57.010 Vendor Specific: Not Supported 00:13:57.010 Reset Timeout: 7500 ms 00:13:57.010 Doorbell Stride: 4 bytes 00:13:57.010 NVM Subsystem Reset: Not Supported 00:13:57.010 Command Sets Supported 00:13:57.010 NVM Command Set: Supported 00:13:57.010 Boot Partition: Not Supported 00:13:57.010 Memory Page Size Minimum: 4096 bytes 00:13:57.010 Memory Page Size Maximum: 65536 bytes 00:13:57.010 Persistent Memory Region: Not Supported 00:13:57.010 Optional Asynchronous Events Supported 00:13:57.010 Namespace Attribute Notices: Supported 00:13:57.010 Firmware Activation Notices: Not Supported 00:13:57.010 ANA Change Notices: Not Supported 00:13:57.010 PLE Aggregate Log Change Notices: Not Supported 00:13:57.010 LBA Status Info Alert Notices: Not Supported 00:13:57.010 EGE Aggregate Log Change Notices: Not Supported 00:13:57.010 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.010 Zone Descriptor Change Notices: Not Supported 00:13:57.010 Discovery Log Change Notices: Not Supported 00:13:57.010 Controller Attributes 00:13:57.010 128-bit Host Identifier: Not Supported 00:13:57.010 Non-Operational Permissive Mode: Not Supported 00:13:57.010 NVM Sets: Not Supported 00:13:57.010 Read Recovery Levels: Not Supported 00:13:57.010 Endurance Groups: Not Supported 00:13:57.010 Predictable Latency Mode: Not Supported 00:13:57.010 Traffic Based Keep ALive: Not Supported 00:13:57.010 Namespace Granularity: Not Supported 00:13:57.010 SQ Associations: Not Supported 00:13:57.010 UUID List: Not Supported 00:13:57.010 Multi-Domain Subsystem: Not Supported 00:13:57.010 Fixed Capacity Management: Not Supported 00:13:57.010 Variable Capacity Management: Not Supported 00:13:57.010 Delete Endurance Group: Not Supported 00:13:57.010 Delete NVM Set: Not Supported 00:13:57.010 Extended LBA Formats Supported: Supported 00:13:57.010 Flexible Data Placement Supported: Not Supported 00:13:57.010 00:13:57.010 Controller Memory Buffer Support 00:13:57.010 ================================ 00:13:57.010 Supported: No 00:13:57.010 00:13:57.010 Persistent Memory Region Support 00:13:57.010 ================================ 00:13:57.010 Supported: No 00:13:57.010 00:13:57.010 Admin Command Set Attributes 00:13:57.010 ============================ 00:13:57.010 Security Send/Receive: Not Supported 00:13:57.010 Format NVM: Supported 00:13:57.010 Firmware Activate/Download: Not Supported 00:13:57.010 Namespace Management: Supported 00:13:57.010 Device Self-Test: Not Supported 00:13:57.010 Directives: Supported 00:13:57.010 NVMe-MI: Not Supported 00:13:57.010 Virtualization Management: Not Supported 00:13:57.010 Doorbell Buffer Config: Supported 00:13:57.010 Get LBA Status Capability: Not Supported 00:13:57.010 Command & Feature Lockdown Capability: Not Supported 00:13:57.010 Abort Command Limit: 4 00:13:57.010 Async Event Request Limit: 4 00:13:57.010 Number of Firmware Slots: N/A 00:13:57.010 Firmware Slot 1 Read-Only: N/A 00:13:57.010 Firmware Activation Without Reset: N/A 00:13:57.010 Multiple Update Detection Support: N/A 00:13:57.010 Firmware Update Granularity: No Information Provided 00:13:57.010 Per-Namespace SMART Log: Yes 00:13:57.010 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.010 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:57.010 Command Effects Log Page: Supported 
00:13:57.010 Get Log Page Extended Data: Supported 00:13:57.010 Telemetry Log Pages: Not Supported 00:13:57.010 Persistent Event Log Pages: Not Supported 00:13:57.010 Supported Log Pages Log Page: May Support 00:13:57.010 Commands Supported & Effects Log Page: Not Supported 00:13:57.010 Feature Identifiers & Effects Log Page:May Support 00:13:57.010 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.010 Data Area 4 for Telemetry Log: Not Supported 00:13:57.010 Error Log Page Entries Supported: 1 00:13:57.010 Keep Alive: Not Supported 00:13:57.010 00:13:57.010 NVM Command Set Attributes 00:13:57.010 ========================== 00:13:57.010 Submission Queue Entry Size 00:13:57.010 Max: 64 00:13:57.010 Min: 64 00:13:57.010 Completion Queue Entry Size 00:13:57.010 Max: 16 00:13:57.010 Min: 16 00:13:57.010 Number of Namespaces: 256 00:13:57.010 Compare Command: Supported 00:13:57.010 Write Uncorrectable Command: Not Supported 00:13:57.010 Dataset Management Command: Supported 00:13:57.010 Write Zeroes Command: Supported 00:13:57.010 Set Features Save Field: Supported 00:13:57.010 Reservations: Not Supported 00:13:57.010 Timestamp: Supported 00:13:57.010 Copy: Supported 00:13:57.010 Volatile Write Cache: Present 00:13:57.010 Atomic Write Unit (Normal): 1 00:13:57.010 Atomic Write Unit (PFail): 1 00:13:57.010 Atomic Compare & Write Unit: 1 00:13:57.010 Fused Compare & Write: Not Supported 00:13:57.010 Scatter-Gather List 00:13:57.010 SGL Command Set: Supported 00:13:57.010 SGL Keyed: Not Supported 00:13:57.010 SGL Bit Bucket Descriptor: Not Supported 00:13:57.010 SGL Metadata Pointer: Not Supported 00:13:57.010 Oversized SGL: Not Supported 00:13:57.010 SGL Metadata Address: Not Supported 00:13:57.010 SGL Offset: Not Supported 00:13:57.010 Transport SGL Data Block: Not Supported 00:13:57.010 Replay Protected Memory Block: Not Supported 00:13:57.010 00:13:57.010 Firmware Slot Information 00:13:57.010 ========================= 00:13:57.010 Active slot: 1 00:13:57.010 Slot 1 Firmware Revision: 1.0 00:13:57.010 00:13:57.010 00:13:57.010 Commands Supported and Effects 00:13:57.010 ============================== 00:13:57.010 Admin Commands 00:13:57.010 -------------- 00:13:57.010 Delete I/O Submission Queue (00h): Supported 00:13:57.010 Create I/O Submission Queue (01h): Supported 00:13:57.010 Get Log Page (02h): Supported 00:13:57.011 Delete I/O Completion Queue (04h): Supported 00:13:57.011 Create I/O Completion Queue (05h): Supported 00:13:57.011 Identify (06h): Supported 00:13:57.011 Abort (08h): Supported 00:13:57.011 Set Features (09h): Supported 00:13:57.011 Get Features (0Ah): Supported 00:13:57.011 Asynchronous Event Request (0Ch): Supported 00:13:57.011 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.011 Directive Send (19h): Supported 00:13:57.011 Directive Receive (1Ah): Supported 00:13:57.011 Virtualization Management (1Ch): Supported 00:13:57.011 Doorbell Buffer Config (7Ch): Supported 00:13:57.011 Format NVM (80h): Supported LBA-Change 00:13:57.011 I/O Commands 00:13:57.011 ------------ 00:13:57.011 Flush (00h): Supported LBA-Change 00:13:57.011 Write (01h): Supported LBA-Change 00:13:57.011 Read (02h): Supported 00:13:57.011 Compare (05h): Supported 00:13:57.011 Write Zeroes (08h): Supported LBA-Change 00:13:57.011 Dataset Management (09h): Supported LBA-Change 00:13:57.011 Unknown (0Ch): Supported 00:13:57.011 Unknown (12h): Supported 00:13:57.011 Copy (19h): Supported LBA-Change 00:13:57.011 Unknown (1Dh): Supported LBA-Change 00:13:57.011 00:13:57.011 Error 
Log 00:13:57.011 ========= 00:13:57.011 00:13:57.011 Arbitration 00:13:57.011 =========== 00:13:57.011 Arbitration Burst: no limit 00:13:57.011 00:13:57.011 Power Management 00:13:57.011 ================ 00:13:57.011 Number of Power States: 1 00:13:57.011 Current Power State: Power State #0 00:13:57.011 Power State #0: 00:13:57.011 Max Power: 25.00 W 00:13:57.011 Non-Operational State: Operational 00:13:57.011 Entry Latency: 16 microseconds 00:13:57.011 Exit Latency: 4 microseconds 00:13:57.011 Relative Read Throughput: 0 00:13:57.011 Relative Read Latency: 0 00:13:57.011 Relative Write Throughput: 0 00:13:57.011 Relative Write Latency: 0 00:13:57.011 Idle Power: Not Reported 00:13:57.011 Active Power: Not Reported 00:13:57.011 Non-Operational Permissive Mode: Not Supported 00:13:57.011 00:13:57.011 Health Information 00:13:57.011 ================== 00:13:57.011 Critical Warnings: 00:13:57.011 Available Spare Space: OK 00:13:57.011 [2024-12-05 12:44:56.727516] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74879 terminated unexpected 00:13:57.011 Temperature: OK 00:13:57.011 Device Reliability: OK 00:13:57.011 Read Only: No 00:13:57.011 Volatile Memory Backup: OK 00:13:57.011 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.011 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.011 Available Spare: 0% 00:13:57.011 Available Spare Threshold: 0% 00:13:57.011 Life Percentage Used: 0% 00:13:57.011 Data Units Read: 1119 00:13:57.011 Data Units Written: 992 00:13:57.011 Host Read Commands: 55008 00:13:57.011 Host Write Commands: 53903 00:13:57.011 Controller Busy Time: 0 minutes 00:13:57.011 Power Cycles: 0 00:13:57.011 Power On Hours: 0 hours 00:13:57.011 Unsafe Shutdowns: 0 00:13:57.011 Unrecoverable Media Errors: 0 00:13:57.011 Lifetime Error Log Entries: 0 00:13:57.011 Warning Temperature Time: 0 minutes 00:13:57.011 Critical Temperature Time: 0 minutes 00:13:57.011 00:13:57.011 Number of Queues 00:13:57.011 ================ 00:13:57.011 Number of I/O Submission Queues: 64 00:13:57.011 Number of I/O Completion Queues: 64 00:13:57.011 00:13:57.011 ZNS Specific Controller Data 00:13:57.011 ============================ 00:13:57.011 Zone Append Size Limit: 0 00:13:57.011 00:13:57.011 00:13:57.011 Active Namespaces 00:13:57.011 ================= 00:13:57.011 Namespace ID:1 00:13:57.011 Error Recovery Timeout: Unlimited 00:13:57.011 Command Set Identifier: NVM (00h) 00:13:57.011 Deallocate: Supported 00:13:57.011 Deallocated/Unwritten Error: Supported 00:13:57.011 Deallocated Read Value: All 0x00 00:13:57.011 Deallocate in Write Zeroes: Not Supported 00:13:57.011 Deallocated Guard Field: 0xFFFF 00:13:57.011 Flush: Supported 00:13:57.011 Reservation: Not Supported 00:13:57.011 Namespace Sharing Capabilities: Private 00:13:57.011 Size (in LBAs): 1310720 (5GiB) 00:13:57.011 Capacity (in LBAs): 1310720 (5GiB) 00:13:57.011 Utilization (in LBAs): 1310720 (5GiB) 00:13:57.011 Thin Provisioning: Not Supported 00:13:57.011 Per-NS Atomic Units: No 00:13:57.011 Maximum Single Source Range Length: 128 00:13:57.011 Maximum Copy Length: 128 00:13:57.011 Maximum Source Range Count: 128 00:13:57.011 NGUID/EUI64 Never Reused: No 00:13:57.011 Namespace Write Protected: No 00:13:57.011 Number of LBA Formats: 8 00:13:57.011 Current LBA Format: LBA Format #04 00:13:57.011 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.011 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.011 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.011 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:13:57.011 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.011 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.011 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.011 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.011 00:13:57.011 NVM Specific Namespace Data 00:13:57.011 =========================== 00:13:57.011 Logical Block Storage Tag Mask: 0 00:13:57.011 Protection Information Capabilities: 00:13:57.011 16b Guard Protection Information Storage Tag Support: No 00:13:57.011 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.011 Storage Tag Check Read Support: No 00:13:57.011 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.011 ===================================================== 00:13:57.011 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:57.011 ===================================================== 00:13:57.011 Controller Capabilities/Features 00:13:57.011 ================================ 00:13:57.012 Vendor ID: 1b36 00:13:57.012 Subsystem Vendor ID: 1af4 00:13:57.012 Serial Number: 12343 00:13:57.012 Model Number: QEMU NVMe Ctrl 00:13:57.012 Firmware Version: 8.0.0 00:13:57.012 Recommended Arb Burst: 6 00:13:57.012 IEEE OUI Identifier: 00 54 52 00:13:57.012 Multi-path I/O 00:13:57.012 May have multiple subsystem ports: No 00:13:57.012 May have multiple controllers: Yes 00:13:57.012 Associated with SR-IOV VF: No 00:13:57.012 Max Data Transfer Size: 524288 00:13:57.012 Max Number of Namespaces: 256 00:13:57.012 Max Number of I/O Queues: 64 00:13:57.012 NVMe Specification Version (VS): 1.4 00:13:57.012 NVMe Specification Version (Identify): 1.4 00:13:57.012 Maximum Queue Entries: 2048 00:13:57.012 Contiguous Queues Required: Yes 00:13:57.012 Arbitration Mechanisms Supported 00:13:57.012 Weighted Round Robin: Not Supported 00:13:57.012 Vendor Specific: Not Supported 00:13:57.012 Reset Timeout: 7500 ms 00:13:57.012 Doorbell Stride: 4 bytes 00:13:57.012 NVM Subsystem Reset: Not Supported 00:13:57.012 Command Sets Supported 00:13:57.012 NVM Command Set: Supported 00:13:57.012 Boot Partition: Not Supported 00:13:57.012 Memory Page Size Minimum: 4096 bytes 00:13:57.012 Memory Page Size Maximum: 65536 bytes 00:13:57.012 Persistent Memory Region: Not Supported 00:13:57.012 Optional Asynchronous Events Supported 00:13:57.012 Namespace Attribute Notices: Supported 00:13:57.012 Firmware Activation Notices: Not Supported 00:13:57.012 ANA Change Notices: Not Supported 00:13:57.012 PLE Aggregate Log Change Notices: Not Supported 00:13:57.012 LBA Status Info Alert Notices: Not Supported 00:13:57.012 EGE Aggregate Log Change Notices: Not Supported 00:13:57.012 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.012 Zone 
Descriptor Change Notices: Not Supported 00:13:57.012 Discovery Log Change Notices: Not Supported 00:13:57.012 Controller Attributes 00:13:57.012 128-bit Host Identifier: Not Supported 00:13:57.012 Non-Operational Permissive Mode: Not Supported 00:13:57.012 NVM Sets: Not Supported 00:13:57.012 Read Recovery Levels: Not Supported 00:13:57.012 Endurance Groups: Supported 00:13:57.012 Predictable Latency Mode: Not Supported 00:13:57.012 Traffic Based Keep ALive: Not Supported 00:13:57.012 Namespace Granularity: Not Supported 00:13:57.012 SQ Associations: Not Supported 00:13:57.012 UUID List: Not Supported 00:13:57.012 Multi-Domain Subsystem: Not Supported 00:13:57.012 Fixed Capacity Management: Not Supported 00:13:57.012 Variable Capacity Management: Not Supported 00:13:57.012 Delete Endurance Group: Not Supported 00:13:57.012 Delete NVM Set: Not Supported 00:13:57.012 Extended LBA Formats Supported: Supported 00:13:57.012 Flexible Data Placement Supported: Supported 00:13:57.012 00:13:57.012 Controller Memory Buffer Support 00:13:57.012 ================================ 00:13:57.012 Supported: No 00:13:57.012 00:13:57.012 Persistent Memory Region Support 00:13:57.012 ================================ 00:13:57.012 Supported: No 00:13:57.012 00:13:57.012 Admin Command Set Attributes 00:13:57.012 ============================ 00:13:57.012 Security Send/Receive: Not Supported 00:13:57.012 Format NVM: Supported 00:13:57.012 Firmware Activate/Download: Not Supported 00:13:57.012 Namespace Management: Supported 00:13:57.012 Device Self-Test: Not Supported 00:13:57.012 Directives: Supported 00:13:57.012 NVMe-MI: Not Supported 00:13:57.012 Virtualization Management: Not Supported 00:13:57.012 Doorbell Buffer Config: Supported 00:13:57.012 Get LBA Status Capability: Not Supported 00:13:57.012 Command & Feature Lockdown Capability: Not Supported 00:13:57.012 Abort Command Limit: 4 00:13:57.012 Async Event Request Limit: 4 00:13:57.012 Number of Firmware Slots: N/A 00:13:57.012 Firmware Slot 1 Read-Only: N/A 00:13:57.012 Firmware Activation Without Reset: N/A 00:13:57.012 Multiple Update Detection Support: N/A 00:13:57.012 Firmware Update Granularity: No Information Provided 00:13:57.012 Per-Namespace SMART Log: Yes 00:13:57.012 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.012 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:57.012 Command Effects Log Page: Supported 00:13:57.012 Get Log Page Extended Data: Supported 00:13:57.012 Telemetry Log Pages: Not Supported 00:13:57.012 Persistent Event Log Pages: Not Supported 00:13:57.012 Supported Log Pages Log Page: May Support 00:13:57.012 Commands Supported & Effects Log Page: Not Supported 00:13:57.012 Feature Identifiers & Effects Log Page:May Support 00:13:57.012 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.012 Data Area 4 for Telemetry Log: Not Supported 00:13:57.012 Error Log Page Entries Supported: 1 00:13:57.012 Keep Alive: Not Supported 00:13:57.012 00:13:57.012 NVM Command Set Attributes 00:13:57.012 ========================== 00:13:57.012 Submission Queue Entry Size 00:13:57.012 Max: 64 00:13:57.012 Min: 64 00:13:57.012 Completion Queue Entry Size 00:13:57.012 Max: 16 00:13:57.012 Min: 16 00:13:57.012 Number of Namespaces: 256 00:13:57.012 Compare Command: Supported 00:13:57.012 Write Uncorrectable Command: Not Supported 00:13:57.012 Dataset Management Command: Supported 00:13:57.012 Write Zeroes Command: Supported 00:13:57.012 Set Features Save Field: Supported 00:13:57.012 Reservations: Not Supported 00:13:57.012 
Timestamp: Supported 00:13:57.013 Copy: Supported 00:13:57.013 Volatile Write Cache: Present 00:13:57.013 Atomic Write Unit (Normal): 1 00:13:57.013 Atomic Write Unit (PFail): 1 00:13:57.013 Atomic Compare & Write Unit: 1 00:13:57.013 Fused Compare & Write: Not Supported 00:13:57.013 Scatter-Gather List 00:13:57.013 SGL Command Set: Supported 00:13:57.013 SGL Keyed: Not Supported 00:13:57.013 SGL Bit Bucket Descriptor: Not Supported 00:13:57.013 SGL Metadata Pointer: Not Supported 00:13:57.013 Oversized SGL: Not Supported 00:13:57.013 SGL Metadata Address: Not Supported 00:13:57.013 SGL Offset: Not Supported 00:13:57.013 Transport SGL Data Block: Not Supported 00:13:57.013 Replay Protected Memory Block: Not Supported 00:13:57.013 00:13:57.013 Firmware Slot Information 00:13:57.013 ========================= 00:13:57.013 Active slot: 1 00:13:57.013 Slot 1 Firmware Revision: 1.0 00:13:57.013 00:13:57.013 00:13:57.013 Commands Supported and Effects 00:13:57.013 ============================== 00:13:57.013 Admin Commands 00:13:57.013 -------------- 00:13:57.013 Delete I/O Submission Queue (00h): Supported 00:13:57.013 Create I/O Submission Queue (01h): Supported 00:13:57.013 Get Log Page (02h): Supported 00:13:57.013 Delete I/O Completion Queue (04h): Supported 00:13:57.013 Create I/O Completion Queue (05h): Supported 00:13:57.013 Identify (06h): Supported 00:13:57.013 Abort (08h): Supported 00:13:57.013 Set Features (09h): Supported 00:13:57.013 Get Features (0Ah): Supported 00:13:57.013 Asynchronous Event Request (0Ch): Supported 00:13:57.013 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.013 Directive Send (19h): Supported 00:13:57.013 Directive Receive (1Ah): Supported 00:13:57.013 Virtualization Management (1Ch): Supported 00:13:57.013 Doorbell Buffer Config (7Ch): Supported 00:13:57.013 Format NVM (80h): Supported LBA-Change 00:13:57.013 I/O Commands 00:13:57.013 ------------ 00:13:57.013 Flush (00h): Supported LBA-Change 00:13:57.013 Write (01h): Supported LBA-Change 00:13:57.013 Read (02h): Supported 00:13:57.013 Compare (05h): Supported 00:13:57.013 Write Zeroes (08h): Supported LBA-Change 00:13:57.013 Dataset Management (09h): Supported LBA-Change 00:13:57.013 Unknown (0Ch): Supported 00:13:57.013 Unknown (12h): Supported 00:13:57.013 Copy (19h): Supported LBA-Change 00:13:57.013 Unknown (1Dh): Supported LBA-Change 00:13:57.013 00:13:57.013 Error Log 00:13:57.013 ========= 00:13:57.013 00:13:57.013 Arbitration 00:13:57.013 =========== 00:13:57.013 Arbitration Burst: no limit 00:13:57.013 00:13:57.013 Power Management 00:13:57.013 ================ 00:13:57.013 Number of Power States: 1 00:13:57.013 Current Power State: Power State #0 00:13:57.013 Power State #0: 00:13:57.013 Max Power: 25.00 W 00:13:57.013 Non-Operational State: Operational 00:13:57.013 Entry Latency: 16 microseconds 00:13:57.013 Exit Latency: 4 microseconds 00:13:57.013 Relative Read Throughput: 0 00:13:57.013 Relative Read Latency: 0 00:13:57.013 Relative Write Throughput: 0 00:13:57.013 Relative Write Latency: 0 00:13:57.013 Idle Power: Not Reported 00:13:57.013 Active Power: Not Reported 00:13:57.013 Non-Operational Permissive Mode: Not Supported 00:13:57.013 00:13:57.013 Health Information 00:13:57.013 ================== 00:13:57.013 Critical Warnings: 00:13:57.013 Available Spare Space: OK 00:13:57.013 Temperature: OK 00:13:57.013 Device Reliability: OK 00:13:57.013 Read Only: No 00:13:57.013 Volatile Memory Backup: OK 00:13:57.013 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.013 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.013 Available Spare: 0% 00:13:57.013 Available Spare Threshold: 0% 00:13:57.013 Life Percentage Used: 0% 00:13:57.013 Data Units Read: 795 00:13:57.013 Data Units Written: 724 00:13:57.013 Host Read Commands: 37607 00:13:57.013 Host Write Commands: 37030 00:13:57.013 Controller Busy Time: 0 minutes 00:13:57.013 Power Cycles: 0 00:13:57.013 Power On Hours: 0 hours 00:13:57.013 Unsafe Shutdowns: 0 00:13:57.013 Unrecoverable Media Errors: 0 00:13:57.013 Lifetime Error Log Entries: 0 00:13:57.013 Warning Temperature Time: 0 minutes 00:13:57.013 Critical Temperature Time: 0 minutes 00:13:57.013 00:13:57.013 Number of Queues 00:13:57.013 ================ 00:13:57.013 Number of I/O Submission Queues: 64 00:13:57.013 Number of I/O Completion Queues: 64 00:13:57.013 00:13:57.013 ZNS Specific Controller Data 00:13:57.013 ============================ 00:13:57.013 Zone Append Size Limit: 0 00:13:57.013 00:13:57.013 00:13:57.013 Active Namespaces 00:13:57.013 ================= 00:13:57.013 Namespace ID:1 00:13:57.013 Error Recovery Timeout: Unlimited 00:13:57.013 Command Set Identifier: NVM (00h) 00:13:57.013 Deallocate: Supported 00:13:57.013 Deallocated/Unwritten Error: Supported 00:13:57.013 Deallocated Read Value: All 0x00 00:13:57.013 Deallocate in Write Zeroes: Not Supported 00:13:57.013 Deallocated Guard Field: 0xFFFF 00:13:57.013 Flush: Supported 00:13:57.013 Reservation: Not Supported 00:13:57.013 Namespace Sharing Capabilities: Multiple Controllers 00:13:57.013 Size (in LBAs): 262144 (1GiB) 00:13:57.013 Capacity (in LBAs): 262144 (1GiB) 00:13:57.013 Utilization (in LBAs): 262144 (1GiB) 00:13:57.013 Thin Provisioning: Not Supported 00:13:57.013 Per-NS Atomic Units: No 00:13:57.013 Maximum Single Source Range Length: 128 00:13:57.014 Maximum Copy Length: 128 00:13:57.014 Maximum Source Range Count: 128 00:13:57.014 NGUID/EUI64 Never Reused: No 00:13:57.014 Namespace Write Protected: No 00:13:57.014 Endurance group ID: 1 00:13:57.014 Number of LBA Formats: 8 00:13:57.014 Current LBA Format: LBA Format #04 00:13:57.014 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.014 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.014 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.014 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.014 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.014 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.014 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.014 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.014 00:13:57.014 Get Feature FDP: 00:13:57.014 ================ 00:13:57.014 Enabled: Yes 00:13:57.014 FDP configuration index: 0 00:13:57.014 00:13:57.014 FDP configurations log page 00:13:57.014 =========================== 00:13:57.014 Number of FDP configurations: 1 00:13:57.014 Version: 0 00:13:57.014 Size: 112 00:13:57.014 FDP Configuration Descriptor: 0 00:13:57.014 Descriptor Size: 96 00:13:57.014 Reclaim Group Identifier format: 2 00:13:57.014 FDP Volatile Write Cache: Not Present 00:13:57.014 FDP Configuration: Valid 00:13:57.014 Vendor Specific Size: 0 00:13:57.014 Number of Reclaim Groups: 2 00:13:57.014 Number of Reclaim Unit Handles: 8 00:13:57.014 Max Placement Identifiers: 128 00:13:57.014 Number of Namespaces Supported: 256 00:13:57.014 Reclaim Unit Nominal Size: 6000000 bytes 00:13:57.014 Estimated Reclaim Unit Time Limit: Not Reported 00:13:57.014 RUH Desc #000: RUH Type: Initially Isolated 00:13:57.014 RUH Desc #001: RUH
Type: Initially Isolated 00:13:57.014 RUH Desc #002: RUH Type: Initially Isolated 00:13:57.014 RUH Desc #003: RUH Type: Initially Isolated 00:13:57.014 RUH Desc #004: RUH Type: Initially Isolated 00:13:57.014 RUH Desc #005: RUH Type: Initially Isolated 00:13:57.014 RUH Desc #006: RUH Type: Initially Isolated 00:13:57.014 RUH Desc #007: RUH Type: Initially Isolated 00:13:57.014 00:13:57.014 FDP reclaim unit handle usage log page 00:13:57.014 ====================================== 00:13:57.014 Number of Reclaim Unit Handles: 8 00:13:57.014 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:57.014 RUH Usage Desc #001: RUH Attributes: Unused 00:13:57.014 RUH Usage Desc #002: RUH Attributes: Unused 00:13:57.014 RUH Usage Desc #003: RUH Attributes: Unused 00:13:57.014 RUH Usage Desc #004: RUH Attributes: Unused 00:13:57.014 RUH Usage Desc #005: RUH Attributes: Unused 00:13:57.014 RUH Usage Desc #006: RUH Attributes: Unused 00:13:57.014 RUH Usage Desc #007: RUH Attributes: Unused 00:13:57.014 00:13:57.014 FDP statistics log page 00:13:57.014 ======================= 00:13:57.014 Host bytes with metadata written: 440115200 00:13:57.014 [2024-12-05 12:44:56.728601] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74879 terminated unexpected 00:13:57.014 Media bytes with metadata written: 440188928 00:13:57.014 Media bytes erased: 0 00:13:57.014 00:13:57.014 FDP events log page 00:13:57.014 =================== 00:13:57.014 Number of FDP events: 0 00:13:57.014 00:13:57.014 NVM Specific Namespace Data 00:13:57.014 =========================== 00:13:57.014 Logical Block Storage Tag Mask: 0 00:13:57.014 Protection Information Capabilities: 00:13:57.014 16b Guard Protection Information Storage Tag Support: No 00:13:57.014 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.014 Storage Tag Check Read Support: No 00:13:57.014 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.014
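The figures in the dump above are internally consistent: SMART temperatures are reported in Kelvin and converted with an integer 273 offset (323 K prints as 50 C), and each namespace size line is just the LBA count times the current LBA format's data size (262144 x 4096 B = 1 GiB for the namespace above). A minimal, self-contained C sketch of that arithmetic, with all values hardcoded from the output above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* SMART temperature is reported in Kelvin; the identify output
	 * prints both scales using an integer 273 K offset: 323 K -> 50 C. */
	unsigned temp_kelvin = 323;
	printf("%u Kelvin (%u Celsius)\n", temp_kelvin, temp_kelvin - 273);

	/* Namespace sizes are LBA counts; with the current LBA format #04
	 * (4096-byte data size, no metadata) the 262144 LBAs above come to
	 * exactly 1 GiB: 262144 * 4096 = 2^30 bytes. */
	uint64_t nlbas = 262144;
	uint64_t bytes = nlbas * 4096;
	printf("%llu bytes = %llu GiB\n",
	       (unsigned long long)bytes,
	       (unsigned long long)(bytes >> 30));
	return 0;
}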
===================================================== 00:13:57.014 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:57.014 ===================================================== 00:13:57.014 Controller Capabilities/Features 00:13:57.014 ================================ 00:13:57.014 Vendor ID: 1b36 00:13:57.014 Subsystem Vendor ID: 1af4 00:13:57.014 Serial Number: 12342 00:13:57.014 Model Number: QEMU NVMe Ctrl 00:13:57.014 Firmware Version: 8.0.0 00:13:57.014 Recommended Arb Burst: 6 00:13:57.014 IEEE OUI Identifier: 00 54 52 00:13:57.014 Multi-path I/O 00:13:57.014 May have multiple subsystem ports: No 00:13:57.014 May have multiple controllers: No 00:13:57.014 Associated with SR-IOV VF: No 00:13:57.014 Max Data Transfer Size: 524288 00:13:57.014 Max Number of Namespaces: 256 00:13:57.014 Max Number of I/O Queues: 64 00:13:57.014 NVMe Specification Version (VS): 1.4 00:13:57.014 NVMe Specification Version (Identify): 1.4 00:13:57.014 Maximum Queue Entries: 2048 00:13:57.014 Contiguous Queues Required: Yes 00:13:57.014 Arbitration Mechanisms Supported 00:13:57.014 Weighted Round Robin: Not Supported 00:13:57.014 Vendor Specific: Not Supported 00:13:57.014 Reset Timeout: 7500 ms 00:13:57.014 Doorbell Stride: 4 bytes 00:13:57.015 NVM Subsystem Reset: Not Supported 00:13:57.015 Command Sets Supported 00:13:57.015 NVM Command Set: Supported 00:13:57.015 Boot Partition: Not Supported 00:13:57.015 Memory Page Size Minimum: 4096 bytes 00:13:57.015 Memory Page Size Maximum: 65536 bytes 00:13:57.015 Persistent Memory Region: Not Supported 00:13:57.015 Optional Asynchronous Events Supported 00:13:57.015 Namespace Attribute Notices: Supported 00:13:57.015 Firmware Activation Notices: Not Supported 00:13:57.015 ANA Change Notices: Not Supported 00:13:57.015 PLE Aggregate Log Change Notices: Not Supported 00:13:57.015 LBA Status Info Alert Notices: Not Supported 00:13:57.015 EGE Aggregate Log Change Notices: Not Supported 00:13:57.015 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.015 Zone Descriptor Change Notices: Not Supported 00:13:57.015 Discovery Log Change Notices: Not Supported 00:13:57.015 Controller Attributes 00:13:57.015 128-bit Host Identifier: Not Supported 00:13:57.015 Non-Operational Permissive Mode: Not Supported 00:13:57.015 NVM Sets: Not Supported 00:13:57.015 Read Recovery Levels: Not Supported 00:13:57.015 Endurance Groups: Not Supported 00:13:57.015 Predictable Latency Mode: Not Supported 00:13:57.015 Traffic Based Keep Alive: Not Supported 00:13:57.015 Namespace Granularity: Not Supported 00:13:57.015 SQ Associations: Not Supported 00:13:57.015 UUID List: Not Supported 00:13:57.015 Multi-Domain Subsystem: Not Supported 00:13:57.015 Fixed Capacity Management: Not Supported 00:13:57.015 Variable Capacity Management: Not Supported 00:13:57.015 Delete Endurance Group: Not Supported 00:13:57.015 Delete NVM Set: Not Supported 00:13:57.015 Extended LBA Formats Supported: Supported 00:13:57.015 Flexible Data Placement Supported: Not Supported 00:13:57.015 00:13:57.015 Controller Memory Buffer Support 00:13:57.015 ================================ 00:13:57.015 Supported: No 00:13:57.015 00:13:57.015 Persistent Memory Region Support 00:13:57.015 ================================ 00:13:57.015 Supported: No 00:13:57.015 00:13:57.015 Admin Command Set Attributes 00:13:57.015 ============================ 00:13:57.015 Security Send/Receive: Not Supported 00:13:57.015 Format NVM: Supported 00:13:57.015 Firmware Activate/Download: Not Supported 00:13:57.015 Namespace Management: Supported 00:13:57.015 Device Self-Test: Not Supported 00:13:57.015 Directives: Supported 00:13:57.015 NVMe-MI: Not Supported 00:13:57.015 Virtualization Management: Not Supported 00:13:57.015 Doorbell Buffer Config: Supported 00:13:57.015 Get LBA Status Capability: Not Supported 00:13:57.015 Command & Feature Lockdown Capability: Not Supported 00:13:57.015 Abort Command Limit: 4 00:13:57.015 Async Event Request Limit: 4 00:13:57.015 Number of Firmware Slots: N/A 00:13:57.015 Firmware Slot 1 Read-Only: N/A 00:13:57.015 Firmware Activation Without Reset: N/A 00:13:57.015 Multiple Update Detection Support: N/A 00:13:57.015 Firmware Update Granularity: No Information Provided 00:13:57.015 Per-Namespace SMART Log: Yes 00:13:57.015 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.015 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:57.015 Command Effects Log Page: Supported 00:13:57.015 Get Log Page Extended Data: Supported 00:13:57.015 Telemetry Log Pages: Not Supported 00:13:57.015 Persistent Event Log Pages: Not Supported 00:13:57.015 Supported Log Pages Log Page: May Support 00:13:57.015 Commands Supported & Effects Log Page: Not Supported 00:13:57.015 Feature Identifiers & Effects Log Page: May Support 00:13:57.015 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.015 Data Area 4 for Telemetry Log: Not Supported 00:13:57.015 Error Log Page Entries Supported: 1 00:13:57.015 Keep Alive: Not Supported 00:13:57.015 00:13:57.015
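Everything in the controller portion of these dumps comes from the NVMe Identify Controller structure, which SPDK caches per controller and exposes through its public API. A hedged sketch of reading a few of the fields shown above (serial, model, firmware revision, namespace count); `print_ctrlr_summary` is our name, not SPDK's, and it assumes a controller already attached elsewhere (see the connect sketch further down):

#include <stdio.h>
#include "spdk/nvme.h"

/* Print a handful of the Identify Controller fields from the dumps
 * above. `ctrlr` must already be attached. */
static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* sn/mn/fr are fixed-width, space-padded byte fields, hence the
	 * precision specifiers instead of plain %s. */
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number: %.40s\n", (const char *)cdata->mn);
	printf("Firmware Version: %.8s\n", (const char *)cdata->fr);
	printf("Max Number of Namespaces: %u\n", cdata->nn);
}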
NVM Command Set Attributes 00:13:57.015 ========================== 00:13:57.015 Submission Queue Entry Size 00:13:57.015 Max: 64 00:13:57.015 Min: 64 00:13:57.015 Completion Queue Entry Size 00:13:57.015 Max: 16 00:13:57.015 Min: 16 00:13:57.015 Number of Namespaces: 256 00:13:57.015 Compare Command: Supported 00:13:57.015 Write Uncorrectable Command: Not Supported 00:13:57.015 Dataset Management Command: Supported 00:13:57.015 Write Zeroes Command: Supported 00:13:57.015 Set Features Save Field: Supported 00:13:57.015 Reservations: Not Supported 00:13:57.015 Timestamp: Supported 00:13:57.015 Copy: Supported 00:13:57.015 Volatile Write Cache: Present 00:13:57.015 Atomic Write Unit (Normal): 1 00:13:57.015 Atomic Write Unit (PFail): 1 00:13:57.016 Atomic Compare & Write Unit: 1 00:13:57.016 Fused Compare & Write: Not Supported 00:13:57.016 Scatter-Gather List 00:13:57.016 SGL Command Set: Supported 00:13:57.016 SGL Keyed: Not Supported 00:13:57.016 SGL Bit Bucket Descriptor: Not Supported 00:13:57.016 SGL Metadata Pointer: Not Supported 00:13:57.016 Oversized SGL: Not Supported 00:13:57.016 SGL Metadata Address: Not Supported 00:13:57.016 SGL Offset: Not Supported 00:13:57.016 Transport SGL Data Block: Not Supported 00:13:57.016 Replay Protected Memory Block: Not Supported 00:13:57.016 00:13:57.016 Firmware Slot Information 00:13:57.016 ========================= 00:13:57.016 Active slot: 1 00:13:57.016 Slot 1 Firmware Revision: 1.0 00:13:57.016 00:13:57.016 00:13:57.016 Commands Supported and Effects 00:13:57.016 ============================== 00:13:57.016 Admin Commands 00:13:57.016 -------------- 00:13:57.016 Delete I/O Submission Queue (00h): Supported 00:13:57.016 Create I/O Submission Queue (01h): Supported 00:13:57.016 Get Log Page (02h): Supported 00:13:57.016 Delete I/O Completion Queue (04h): Supported 00:13:57.016 Create I/O Completion Queue (05h): Supported 00:13:57.016 Identify (06h): Supported 00:13:57.016 Abort (08h): Supported 00:13:57.016 Set Features (09h): Supported 00:13:57.016 Get Features (0Ah): Supported 00:13:57.016 Asynchronous Event Request (0Ch): Supported 00:13:57.016 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.016 Directive Send (19h): Supported 00:13:57.016 Directive Receive (1Ah): Supported 00:13:57.016 Virtualization Management (1Ch): Supported 00:13:57.016 Doorbell Buffer Config (7Ch): Supported 00:13:57.016 Format NVM (80h): Supported LBA-Change 00:13:57.016 I/O Commands 00:13:57.016 ------------ 00:13:57.016 Flush (00h): Supported LBA-Change 00:13:57.016 Write (01h): Supported LBA-Change 00:13:57.016 Read (02h): Supported 00:13:57.016 Compare (05h): Supported 00:13:57.016 Write Zeroes (08h): Supported LBA-Change 00:13:57.016 Dataset Management (09h): Supported LBA-Change 00:13:57.016 Unknown (0Ch): Supported 00:13:57.016 Unknown (12h): Supported 00:13:57.016 Copy (19h):
Supported LBA-Change 00:13:57.016 Unknown (1Dh): Supported LBA-Change 00:13:57.016 00:13:57.016 Error Log 00:13:57.016 ========= 00:13:57.016 00:13:57.016 Arbitration 00:13:57.016 =========== 00:13:57.016 Arbitration Burst: no limit 00:13:57.016 00:13:57.016 Power Management 00:13:57.016 ================ 00:13:57.016 Number of Power States: 1 00:13:57.016 Current Power State: Power State #0 00:13:57.016 Power State #0: 00:13:57.016 Max Power: 25.00 W 00:13:57.016 Non-Operational State: Operational 00:13:57.016 Entry Latency: 16 microseconds 00:13:57.016 Exit Latency: 4 microseconds 00:13:57.016 Relative Read Throughput: 0 00:13:57.016 Relative Read Latency: 0 00:13:57.016 Relative Write Throughput: 0 00:13:57.016 Relative Write Latency: 0 00:13:57.016 Idle Power: Not Reported 00:13:57.016 Active Power: Not Reported 00:13:57.016 Non-Operational Permissive Mode: Not Supported 00:13:57.016 00:13:57.016 Health Information 00:13:57.016 ================== 00:13:57.016 Critical Warnings: 00:13:57.016 Available Spare Space: OK 00:13:57.016 Temperature: OK 00:13:57.016 Device Reliability: OK 00:13:57.016 Read Only: No 00:13:57.016 Volatile Memory Backup: OK 00:13:57.016 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.016 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.016 Available Spare: 0% 00:13:57.016 Available Spare Threshold: 0% 00:13:57.016 Life Percentage Used: 0% 00:13:57.016 Data Units Read: 2272 00:13:57.016 Data Units Written: 2059 00:13:57.016 Host Read Commands: 111471 00:13:57.016 Host Write Commands: 109740 00:13:57.016 Controller Busy Time: 0 minutes 00:13:57.016 Power Cycles: 0 00:13:57.016 Power On Hours: 0 hours 00:13:57.016 Unsafe Shutdowns: 0 00:13:57.016 Unrecoverable Media Errors: 0 00:13:57.016 Lifetime Error Log Entries: 0 00:13:57.016 Warning Temperature Time: 0 minutes 00:13:57.016 Critical Temperature Time: 0 minutes 00:13:57.016 00:13:57.016 Number of Queues 00:13:57.016 ================ 00:13:57.016 Number of I/O Submission Queues: 64 00:13:57.016 Number of I/O Completion Queues: 64 00:13:57.016 00:13:57.016 ZNS Specific Controller Data 00:13:57.016 ============================ 00:13:57.016 Zone Append Size Limit: 0 00:13:57.016 00:13:57.016 00:13:57.016 Active Namespaces 00:13:57.016 ================= 00:13:57.016 Namespace ID:1 00:13:57.016 Error Recovery Timeout: Unlimited 00:13:57.016 Command Set Identifier: NVM (00h) 00:13:57.016 Deallocate: Supported 00:13:57.016 Deallocated/Unwritten Error: Supported 00:13:57.016 Deallocated Read Value: All 0x00 00:13:57.016 Deallocate in Write Zeroes: Not Supported 00:13:57.016 Deallocated Guard Field: 0xFFFF 00:13:57.016 Flush: Supported 00:13:57.016 Reservation: Not Supported 00:13:57.016 Namespace Sharing Capabilities: Private 00:13:57.016 Size (in LBAs): 1048576 (4GiB) 00:13:57.016 Capacity (in LBAs): 1048576 (4GiB) 00:13:57.016 Utilization (in LBAs): 1048576 (4GiB) 00:13:57.016 Thin Provisioning: Not Supported 00:13:57.016 Per-NS Atomic Units: No 00:13:57.016 Maximum Single Source Range Length: 128 00:13:57.016 Maximum Copy Length: 128 00:13:57.016 Maximum Source Range Count: 128 00:13:57.016 NGUID/EUI64 Never Reused: No 00:13:57.016 Namespace Write Protected: No 00:13:57.016 Number of LBA Formats: 8 00:13:57.016 Current LBA Format: LBA Format #04 00:13:57.016 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.016 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.016 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.016 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.016 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.016 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.016 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.016 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.016 00:13:57.016 NVM Specific Namespace Data 00:13:57.016 =========================== 00:13:57.016 Logical Block Storage Tag Mask: 0 00:13:57.016 Protection Information Capabilities: 00:13:57.016 16b Guard Protection Information Storage Tag Support: No 00:13:57.017 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.017 Storage Tag Check Read Support: No 00:13:57.017 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Namespace ID:2 00:13:57.017 Error Recovery Timeout: Unlimited 00:13:57.017 Command Set Identifier: NVM (00h) 00:13:57.017 Deallocate: Supported 00:13:57.017 Deallocated/Unwritten Error: Supported 00:13:57.017 Deallocated Read Value: All 0x00 00:13:57.017 Deallocate in Write Zeroes: Not Supported 00:13:57.017 Deallocated Guard Field: 0xFFFF 00:13:57.017 Flush: Supported 00:13:57.017 Reservation: Not Supported 00:13:57.017 Namespace Sharing Capabilities: Private 00:13:57.017 Size (in LBAs): 1048576 (4GiB) 00:13:57.017 Capacity (in LBAs): 1048576 (4GiB) 00:13:57.017 Utilization (in LBAs): 1048576 (4GiB) 00:13:57.017 Thin Provisioning: Not Supported 00:13:57.017 Per-NS Atomic Units: No 00:13:57.017 Maximum Single Source Range Length: 128 00:13:57.017 Maximum Copy Length: 128 00:13:57.017 Maximum Source Range Count: 128 00:13:57.017 NGUID/EUI64 Never Reused: No 00:13:57.017 Namespace Write Protected: No 00:13:57.017 Number of LBA Formats: 8 00:13:57.017 Current LBA Format: LBA Format #04 00:13:57.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.017 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.017 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.017 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.017 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.017 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.017 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.017 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.017 00:13:57.017 NVM Specific Namespace Data 00:13:57.017 =========================== 00:13:57.017 Logical Block Storage Tag Mask: 0 00:13:57.017 Protection Information Capabilities: 00:13:57.017 16b Guard Protection Information Storage Tag Support: No 00:13:57.017 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.017 Storage Tag Check Read Support: No 00:13:57.017 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Namespace ID:3 00:13:57.017 Error Recovery Timeout: Unlimited 00:13:57.017 Command Set Identifier: NVM (00h) 00:13:57.017 Deallocate: Supported 00:13:57.017 Deallocated/Unwritten Error: Supported 00:13:57.017 Deallocated Read Value: All 0x00 00:13:57.017 Deallocate in Write Zeroes: Not Supported 00:13:57.017 Deallocated Guard Field: 0xFFFF 00:13:57.017 Flush: Supported 00:13:57.017 Reservation: Not Supported 00:13:57.017 Namespace Sharing Capabilities: Private 00:13:57.017 Size (in LBAs): 1048576 (4GiB) 00:13:57.017 Capacity (in LBAs): 1048576 (4GiB) 00:13:57.017 Utilization (in LBAs): 1048576 (4GiB) 00:13:57.017 Thin Provisioning: Not Supported 00:13:57.017 Per-NS Atomic Units: No 00:13:57.017 Maximum Single Source Range Length: 128 00:13:57.017 Maximum Copy Length: 128 00:13:57.017 Maximum Source Range Count: 128 00:13:57.017 NGUID/EUI64 Never Reused: No 00:13:57.017 Namespace Write Protected: No 00:13:57.017 Number of LBA Formats: 8 00:13:57.017 Current LBA Format: LBA Format #04 00:13:57.017 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.017 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.017 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.017 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.017 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.017 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.017 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.017 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.017 00:13:57.017 NVM Specific Namespace Data 00:13:57.017 =========================== 00:13:57.017 Logical Block Storage Tag Mask: 0 00:13:57.017 Protection Information Capabilities: 00:13:57.017 16b Guard Protection Information Storage Tag Support: No 00:13:57.017 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.017 Storage Tag Check Read Support: No 00:13:57.017 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.017 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.018 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:57.018 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
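The `-r 'trtype:PCIe traddr:...'` argument is a transport-ID string naming which controller to attach to. A rough sketch of the same attach flow using SPDK's public API (spdk_nvme_transport_id_parse / spdk_nvme_connect); error handling is abbreviated, the environment options are left at defaults, and the program name is ours:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	/* The hugepage/PCI environment must be initialized before any
	 * NVMe call. */
	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same transport-ID string the harness passes via -r above. */
	if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:10.0") != 0) {
		fprintf(stderr, "bad transport ID\n");
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* default controller opts */
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to attach to %s\n", trid.traddr);
		return 1;
	}

	/* ... read identify data here ... */
	spdk_nvme_detach(ctrlr);
	return 0;
}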
00:13:57.276 ===================================================== 00:13:57.276 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:57.276 ===================================================== 00:13:57.276 Controller Capabilities/Features 00:13:57.276 ================================ 00:13:57.276 Vendor ID: 1b36 00:13:57.276 Subsystem Vendor ID: 1af4 00:13:57.276 Serial Number: 12340 00:13:57.276 Model Number: QEMU NVMe Ctrl 00:13:57.276 Firmware Version: 8.0.0 00:13:57.276 Recommended Arb Burst: 6 00:13:57.276 IEEE OUI Identifier: 00 54 52 00:13:57.276 Multi-path I/O 00:13:57.276 May have multiple subsystem ports: No 00:13:57.276 May have multiple controllers: No 00:13:57.276 Associated with SR-IOV VF: No 00:13:57.276 Max Data Transfer Size: 524288 00:13:57.276 Max Number of Namespaces: 256 00:13:57.276 Max Number of I/O Queues: 64 00:13:57.276 NVMe Specification Version (VS): 1.4 00:13:57.276 NVMe Specification Version (Identify): 1.4 00:13:57.276 Maximum Queue Entries: 2048 00:13:57.276 Contiguous Queues Required: Yes 00:13:57.276 Arbitration Mechanisms Supported 00:13:57.276 Weighted Round Robin: Not Supported 00:13:57.276 Vendor Specific: Not Supported 00:13:57.276 Reset Timeout: 7500 ms 00:13:57.276 Doorbell Stride: 4 bytes 00:13:57.276 NVM Subsystem Reset: Not Supported 00:13:57.276 Command Sets Supported 00:13:57.276 NVM Command Set: Supported 00:13:57.276 Boot Partition: Not Supported 00:13:57.276 Memory Page Size Minimum: 4096 bytes 00:13:57.276 Memory Page Size Maximum: 65536 bytes 00:13:57.276 Persistent Memory Region: Not Supported 00:13:57.276 Optional Asynchronous Events Supported 00:13:57.276 Namespace Attribute Notices: Supported 00:13:57.276 Firmware Activation Notices: Not Supported 00:13:57.276 ANA Change Notices: Not Supported 00:13:57.276 PLE Aggregate Log Change Notices: Not Supported 00:13:57.276 LBA Status Info Alert Notices: Not Supported 00:13:57.276 EGE Aggregate Log Change Notices: Not Supported 00:13:57.276 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.276 Zone Descriptor Change Notices: Not Supported 00:13:57.276 Discovery Log Change Notices: Not Supported 00:13:57.276 Controller Attributes 00:13:57.276 128-bit Host Identifier: Not Supported 00:13:57.276 Non-Operational Permissive Mode: Not Supported 00:13:57.276 NVM Sets: Not Supported 00:13:57.276 Read Recovery Levels: Not Supported 00:13:57.276 Endurance Groups: Not Supported 00:13:57.276 Predictable Latency Mode: Not Supported 00:13:57.276 Traffic Based Keep Alive: Not Supported 00:13:57.276 Namespace Granularity: Not Supported 00:13:57.276 SQ Associations: Not Supported 00:13:57.276 UUID List: Not Supported 00:13:57.276 Multi-Domain Subsystem: Not Supported 00:13:57.276 Fixed Capacity Management: Not Supported 00:13:57.276 Variable Capacity Management: Not Supported 00:13:57.276 Delete Endurance Group: Not Supported 00:13:57.276 Delete NVM Set: Not Supported 00:13:57.276 Extended LBA Formats Supported: Supported 00:13:57.276 Flexible Data Placement Supported: Not Supported 00:13:57.276 00:13:57.276 Controller Memory Buffer Support 00:13:57.276 ================================ 00:13:57.276 Supported: No 00:13:57.276 00:13:57.276 Persistent Memory Region Support 00:13:57.276 ================================ 00:13:57.276 Supported: No 00:13:57.276 00:13:57.276 Admin Command Set Attributes 00:13:57.276 ============================ 00:13:57.276 Security Send/Receive: Not Supported 00:13:57.276 Format NVM: Supported 00:13:57.276 Firmware Activate/Download: Not Supported 00:13:57.276 Namespace Management: Supported 00:13:57.276 Device Self-Test: Not Supported 00:13:57.276 Directives: Supported 00:13:57.276 NVMe-MI: Not Supported 00:13:57.276 Virtualization Management: Not Supported 00:13:57.276 Doorbell Buffer Config: Supported 00:13:57.276 Get LBA Status Capability: Not Supported 00:13:57.276 Command & Feature Lockdown Capability: Not Supported 00:13:57.276 Abort Command Limit: 4 00:13:57.276 Async Event Request Limit: 4 00:13:57.276 Number of Firmware Slots: N/A 00:13:57.276 Firmware Slot 1 Read-Only: N/A 00:13:57.276 Firmware Activation Without Reset: N/A 00:13:57.276 Multiple Update Detection Support: N/A 00:13:57.276 Firmware Update Granularity: No Information Provided 00:13:57.276 Per-Namespace SMART Log: Yes 00:13:57.276 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.276 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:13:57.276 Command Effects Log Page: Supported 00:13:57.276 Get Log Page Extended Data: Supported 00:13:57.276 Telemetry Log Pages: Not Supported 00:13:57.276 Persistent Event Log Pages: Not Supported 00:13:57.276 Supported Log Pages Log Page: May Support 00:13:57.276 Commands Supported & Effects Log Page: Not Supported 00:13:57.276 Feature Identifiers & Effects Log Page: May Support 00:13:57.276 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.276 Data Area 4 for Telemetry Log: Not Supported 00:13:57.276 Error Log Page Entries Supported: 1 00:13:57.276 Keep Alive: Not Supported 00:13:57.276 00:13:57.276 NVM Command Set Attributes 00:13:57.276 ========================== 00:13:57.276 Submission Queue Entry Size 00:13:57.276 Max: 64 00:13:57.276 Min: 64 00:13:57.276 Completion Queue Entry Size 00:13:57.276 Max: 16 00:13:57.276 Min: 16 00:13:57.276 Number of Namespaces: 256 00:13:57.277 Compare Command: Supported 00:13:57.277 Write Uncorrectable Command: Not Supported 00:13:57.277 Dataset Management Command: Supported 00:13:57.277 Write Zeroes Command: Supported 00:13:57.277 Set Features Save Field: Supported 00:13:57.277 Reservations: Not Supported 00:13:57.277 Timestamp: Supported 00:13:57.277 Copy: Supported 00:13:57.277 Volatile Write Cache: Present 00:13:57.277 Atomic Write Unit (Normal): 1 00:13:57.277 Atomic Write Unit (PFail): 1 00:13:57.277 Atomic Compare & Write Unit: 1 00:13:57.277 Fused Compare & Write: Not Supported 00:13:57.277 Scatter-Gather List 00:13:57.277 SGL Command Set: Supported 00:13:57.277 SGL Keyed: Not Supported 00:13:57.277 SGL Bit Bucket Descriptor: Not Supported 00:13:57.277 SGL Metadata Pointer: Not Supported 00:13:57.277 Oversized SGL: Not Supported 00:13:57.277 SGL Metadata Address: Not Supported 00:13:57.277 SGL Offset: Not Supported 00:13:57.277 Transport SGL Data Block: Not Supported 00:13:57.277 Replay Protected Memory Block: Not Supported 00:13:57.277 00:13:57.277 Firmware Slot Information 00:13:57.277 ========================= 00:13:57.277 Active slot: 1 00:13:57.277 Slot 1 Firmware Revision: 1.0 00:13:57.277 00:13:57.277 00:13:57.277 Commands Supported and Effects 00:13:57.277 ============================== 00:13:57.277 Admin Commands 00:13:57.277 -------------- 00:13:57.277 Delete I/O Submission Queue (00h): Supported 00:13:57.277 Create I/O Submission Queue (01h): Supported 00:13:57.277 Get Log Page (02h): Supported 00:13:57.277 Delete I/O Completion Queue (04h): Supported 00:13:57.277 Create I/O Completion Queue (05h): Supported 00:13:57.277 Identify (06h): Supported 00:13:57.277 Abort (08h): Supported
00:13:57.277 Set Features (09h): Supported 00:13:57.277 Get Features (0Ah): Supported 00:13:57.277 Asynchronous Event Request (0Ch): Supported 00:13:57.277 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.277 Directive Send (19h): Supported 00:13:57.277 Directive Receive (1Ah): Supported 00:13:57.277 Virtualization Management (1Ch): Supported 00:13:57.277 Doorbell Buffer Config (7Ch): Supported 00:13:57.277 Format NVM (80h): Supported LBA-Change 00:13:57.277 I/O Commands 00:13:57.277 ------------ 00:13:57.277 Flush (00h): Supported LBA-Change 00:13:57.277 Write (01h): Supported LBA-Change 00:13:57.277 Read (02h): Supported 00:13:57.277 Compare (05h): Supported 00:13:57.277 Write Zeroes (08h): Supported LBA-Change 00:13:57.277 Dataset Management (09h): Supported LBA-Change 00:13:57.277 Unknown (0Ch): Supported 00:13:57.277 Unknown (12h): Supported 00:13:57.277 Copy (19h): Supported LBA-Change 00:13:57.277 Unknown (1Dh): Supported LBA-Change 00:13:57.277 00:13:57.277 Error Log 00:13:57.277 ========= 00:13:57.277 00:13:57.277 Arbitration 00:13:57.277 =========== 00:13:57.277 Arbitration Burst: no limit 00:13:57.277 00:13:57.277 Power Management 00:13:57.277 ================ 00:13:57.277 Number of Power States: 1 00:13:57.277 Current Power State: Power State #0 00:13:57.277 Power State #0: 00:13:57.277 Max Power: 25.00 W 00:13:57.277 Non-Operational State: Operational 00:13:57.277 Entry Latency: 16 microseconds 00:13:57.277 Exit Latency: 4 microseconds 00:13:57.277 Relative Read Throughput: 0 00:13:57.277 Relative Read Latency: 0 00:13:57.277 Relative Write Throughput: 0 00:13:57.277 Relative Write Latency: 0 00:13:57.277 Idle Power: Not Reported 00:13:57.277 Active Power: Not Reported 00:13:57.277 Non-Operational Permissive Mode: Not Supported 00:13:57.277 00:13:57.277 Health Information 00:13:57.277 ================== 00:13:57.277 Critical Warnings: 00:13:57.277 Available Spare Space: OK 00:13:57.277 Temperature: OK 00:13:57.277 Device Reliability: OK 00:13:57.277 Read Only: No 00:13:57.277 Volatile Memory Backup: OK 00:13:57.277 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.277 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.277 Available Spare: 0% 00:13:57.277 Available Spare Threshold: 0% 00:13:57.277 Life Percentage Used: 0% 00:13:57.277 Data Units Read: 706 00:13:57.277 Data Units Written: 634 00:13:57.277 Host Read Commands: 36377 00:13:57.277 Host Write Commands: 36163 00:13:57.277 Controller Busy Time: 0 minutes 00:13:57.277 Power Cycles: 0 00:13:57.277 Power On Hours: 0 hours 00:13:57.277 Unsafe Shutdowns: 0 00:13:57.277 Unrecoverable Media Errors: 0 00:13:57.277 Lifetime Error Log Entries: 0 00:13:57.277 Warning Temperature Time: 0 minutes 00:13:57.277 Critical Temperature Time: 0 minutes 00:13:57.277 00:13:57.277 Number of Queues 00:13:57.277 ================ 00:13:57.277 Number of I/O Submission Queues: 64 00:13:57.277 Number of I/O Completion Queues: 64 00:13:57.277 00:13:57.277 ZNS Specific Controller Data 00:13:57.277 ============================ 00:13:57.277 Zone Append Size Limit: 0 00:13:57.277 00:13:57.277 00:13:57.277 Active Namespaces 00:13:57.277 ================= 00:13:57.277 Namespace ID:1 00:13:57.277 Error Recovery Timeout: Unlimited 00:13:57.277 Command Set Identifier: NVM (00h) 00:13:57.277 Deallocate: Supported 00:13:57.277 Deallocated/Unwritten Error: Supported 00:13:57.277 Deallocated Read Value: All 0x00 00:13:57.277 Deallocate in Write Zeroes: Not Supported 00:13:57.277 Deallocated Guard Field: 0xFFFF 00:13:57.277 Flush: 
Supported 00:13:57.277 Reservation: Not Supported 00:13:57.277 Metadata Transferred as: Separate Metadata Buffer 00:13:57.277 Namespace Sharing Capabilities: Private 00:13:57.277 Size (in LBAs): 1548666 (5GiB) 00:13:57.277 Capacity (in LBAs): 1548666 (5GiB) 00:13:57.277 Utilization (in LBAs): 1548666 (5GiB) 00:13:57.277 Thin Provisioning: Not Supported 00:13:57.277 Per-NS Atomic Units: No 00:13:57.277 Maximum Single Source Range Length: 128 00:13:57.277 Maximum Copy Length: 128 00:13:57.277 Maximum Source Range Count: 128 00:13:57.277 NGUID/EUI64 Never Reused: No 00:13:57.277 Namespace Write Protected: No 00:13:57.277 Number of LBA Formats: 8 00:13:57.277 Current LBA Format: LBA Format #07 00:13:57.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.277 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.277 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.277 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.277 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.277 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.277 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.277 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.277 00:13:57.277 NVM Specific Namespace Data 00:13:57.277 =========================== 00:13:57.277 Logical Block Storage Tag Mask: 0 00:13:57.277 Protection Information Capabilities: 00:13:57.277 16b Guard Protection Information Storage Tag Support: No 00:13:57.277 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.277 Storage Tag Check Read Support: No 00:13:57.277 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.277 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.277 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.277 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.277 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.277 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.278 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.278 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.278 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:57.278 12:44:56 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:13:57.536 ===================================================== 00:13:57.536 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:57.536 ===================================================== 00:13:57.536 Controller Capabilities/Features 00:13:57.536 ================================ 00:13:57.536 Vendor ID: 1b36 00:13:57.536 Subsystem Vendor ID: 1af4 00:13:57.536 Serial Number: 12341 00:13:57.536 Model Number: QEMU NVMe Ctrl 00:13:57.536 Firmware Version: 8.0.0 00:13:57.536 Recommended Arb Burst: 6 00:13:57.536 IEEE OUI Identifier: 00 54 52 00:13:57.536 Multi-path I/O 00:13:57.536 May have multiple subsystem ports: No 00:13:57.536 May have multiple controllers: No 00:13:57.536 Associated with SR-IOV VF: No 00:13:57.536 Max Data Transfer Size: 524288 00:13:57.536 Max Number of Namespaces: 256 00:13:57.536 Max Number of I/O Queues: 64 00:13:57.536 NVMe 
Specification Version (VS): 1.4 00:13:57.536 NVMe Specification Version (Identify): 1.4 00:13:57.536 Maximum Queue Entries: 2048 00:13:57.536 Contiguous Queues Required: Yes 00:13:57.536 Arbitration Mechanisms Supported 00:13:57.536 Weighted Round Robin: Not Supported 00:13:57.536 Vendor Specific: Not Supported 00:13:57.536 Reset Timeout: 7500 ms 00:13:57.536 Doorbell Stride: 4 bytes 00:13:57.536 NVM Subsystem Reset: Not Supported 00:13:57.536 Command Sets Supported 00:13:57.536 NVM Command Set: Supported 00:13:57.536 Boot Partition: Not Supported 00:13:57.536 Memory Page Size Minimum: 4096 bytes 00:13:57.536 Memory Page Size Maximum: 65536 bytes 00:13:57.536 Persistent Memory Region: Not Supported 00:13:57.536 Optional Asynchronous Events Supported 00:13:57.536 Namespace Attribute Notices: Supported 00:13:57.536 Firmware Activation Notices: Not Supported 00:13:57.536 ANA Change Notices: Not Supported 00:13:57.536 PLE Aggregate Log Change Notices: Not Supported 00:13:57.536 LBA Status Info Alert Notices: Not Supported 00:13:57.536 EGE Aggregate Log Change Notices: Not Supported 00:13:57.536 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.536 Zone Descriptor Change Notices: Not Supported 00:13:57.536 Discovery Log Change Notices: Not Supported 00:13:57.536 Controller Attributes 00:13:57.536 128-bit Host Identifier: Not Supported 00:13:57.536 Non-Operational Permissive Mode: Not Supported 00:13:57.536 NVM Sets: Not Supported 00:13:57.536 Read Recovery Levels: Not Supported 00:13:57.536 Endurance Groups: Not Supported 00:13:57.536 Predictable Latency Mode: Not Supported 00:13:57.536 Traffic Based Keep Alive: Not Supported 00:13:57.536 Namespace Granularity: Not Supported 00:13:57.536 SQ Associations: Not Supported 00:13:57.536 UUID List: Not Supported 00:13:57.536 Multi-Domain Subsystem: Not Supported 00:13:57.536 Fixed Capacity Management: Not Supported 00:13:57.536 Variable Capacity Management: Not Supported 00:13:57.536 Delete Endurance Group: Not Supported 00:13:57.536 Delete NVM Set: Not Supported 00:13:57.536 Extended LBA Formats Supported: Supported 00:13:57.536 Flexible Data Placement Supported: Not Supported 00:13:57.536 00:13:57.536 Controller Memory Buffer Support 00:13:57.536 ================================ 00:13:57.536 Supported: No 00:13:57.536 00:13:57.536 Persistent Memory Region Support 00:13:57.536 ================================ 00:13:57.536 Supported: No 00:13:57.536 00:13:57.536 Admin Command Set Attributes 00:13:57.536 ============================ 00:13:57.536 Security Send/Receive: Not Supported 00:13:57.537 Format NVM: Supported 00:13:57.537 Firmware Activate/Download: Not Supported 00:13:57.537 Namespace Management: Supported 00:13:57.537 Device Self-Test: Not Supported 00:13:57.537 Directives: Supported 00:13:57.537 NVMe-MI: Not Supported 00:13:57.537 Virtualization Management: Not Supported 00:13:57.537 Doorbell Buffer Config: Supported 00:13:57.537 Get LBA Status Capability: Not Supported 00:13:57.537 Command & Feature Lockdown Capability: Not Supported 00:13:57.537 Abort Command Limit: 4 00:13:57.537 Async Event Request Limit: 4 00:13:57.537 Number of Firmware Slots: N/A 00:13:57.537 Firmware Slot 1 Read-Only: N/A 00:13:57.537 Firmware Activation Without Reset: N/A 00:13:57.537 Multiple Update Detection Support: N/A 00:13:57.537 Firmware Update Granularity: No Information Provided 00:13:57.537 Per-Namespace SMART Log: Yes 00:13:57.537 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.537 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:13:57.537 Command Effects Log Page: Supported 00:13:57.537 Get Log Page Extended Data: Supported 00:13:57.537 Telemetry Log Pages: Not Supported 00:13:57.537 Persistent Event Log Pages: Not Supported 00:13:57.537 Supported Log Pages Log Page: May Support 00:13:57.537 Commands Supported & Effects Log Page: Not Supported 00:13:57.537 Feature Identifiers & Effects Log Page: May Support 00:13:57.537 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.537 Data Area 4 for Telemetry Log: Not Supported 00:13:57.537 Error Log Page Entries Supported: 1 00:13:57.537 Keep Alive: Not Supported 00:13:57.537 00:13:57.537
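The Active Namespaces sections in these dumps are produced by walking the controller's active-namespace list. A small sketch of that walk using SPDK's iterator API; `print_active_namespaces` is our name, and `ctrlr` must already be attached:

#include <stdio.h>
#include "spdk/nvme.h"

/* Iterate every active namespace and print the fields that appear as
 * "Namespace ID:N ... Size (in LBAs): ..." in the output above. */
static void print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace ID:%u Size (in LBAs): %llu Sector Size: %u\n",
		       nsid,
		       (unsigned long long)spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}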
NVM Command Set Attributes 00:13:57.537 ========================== 00:13:57.537 Submission Queue Entry Size 00:13:57.537 Max: 64 00:13:57.537 Min: 64 00:13:57.537 Completion Queue Entry Size 00:13:57.537 Max: 16 00:13:57.537 Min: 16 00:13:57.537 Number of Namespaces: 256 00:13:57.537 Compare Command: Supported 00:13:57.537 Write Uncorrectable Command: Not Supported 00:13:57.537 Dataset Management Command: Supported 00:13:57.537 Write Zeroes Command: Supported 00:13:57.537 Set Features Save Field: Supported 00:13:57.537 Reservations: Not Supported 00:13:57.537 Timestamp: Supported 00:13:57.537 Copy: Supported 00:13:57.537 Volatile Write Cache: Present 00:13:57.537 Atomic Write Unit (Normal): 1 00:13:57.537 Atomic Write Unit (PFail): 1 00:13:57.537 Atomic Compare & Write Unit: 1 00:13:57.537 Fused Compare & Write: Not Supported 00:13:57.537 Scatter-Gather List 00:13:57.537 SGL Command Set: Supported 00:13:57.537 SGL Keyed: Not Supported 00:13:57.537 SGL Bit Bucket Descriptor: Not Supported 00:13:57.537 SGL Metadata Pointer: Not Supported 00:13:57.537 Oversized SGL: Not Supported 00:13:57.537 SGL Metadata Address: Not Supported 00:13:57.537 SGL Offset: Not Supported 00:13:57.537 Transport SGL Data Block: Not Supported 00:13:57.537 Replay Protected Memory Block: Not Supported 00:13:57.537 00:13:57.537 Firmware Slot Information 00:13:57.537 ========================= 00:13:57.537 Active slot: 1 00:13:57.537 Slot 1 Firmware Revision: 1.0 00:13:57.537 00:13:57.537 00:13:57.537 Commands Supported and Effects 00:13:57.537 ============================== 00:13:57.537 Admin Commands 00:13:57.537 -------------- 00:13:57.537 Delete I/O Submission Queue (00h): Supported 00:13:57.537 Create I/O Submission Queue (01h): Supported 00:13:57.537 Get Log Page (02h): Supported 00:13:57.537 Delete I/O Completion Queue (04h): Supported 00:13:57.537 Create I/O Completion Queue (05h): Supported 00:13:57.537 Identify (06h): Supported 00:13:57.537 Abort (08h): Supported 00:13:57.537 Set Features (09h): Supported 00:13:57.537 Get Features (0Ah): Supported 00:13:57.537 Asynchronous Event Request (0Ch): Supported 00:13:57.537 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.537 Directive Send (19h): Supported 00:13:57.537 Directive Receive (1Ah): Supported 00:13:57.537 Virtualization Management (1Ch): Supported 00:13:57.537 Doorbell Buffer Config (7Ch): Supported 00:13:57.537 Format NVM (80h): Supported LBA-Change 00:13:57.537 I/O Commands 00:13:57.537 ------------ 00:13:57.537 Flush (00h): Supported LBA-Change 00:13:57.537 Write (01h): Supported LBA-Change 00:13:57.537 Read (02h): Supported 00:13:57.537 Compare (05h): Supported 00:13:57.537 Write Zeroes (08h): Supported LBA-Change 00:13:57.537 Dataset Management (09h): Supported LBA-Change 00:13:57.537 Unknown (0Ch): Supported 00:13:57.537 Unknown (12h): Supported 00:13:57.537 Copy (19h): Supported LBA-Change 00:13:57.537 Unknown (1Dh):
Supported LBA-Change 00:13:57.537 00:13:57.537 Error Log 00:13:57.537 ========= 00:13:57.537 00:13:57.537 Arbitration 00:13:57.537 =========== 00:13:57.537 Arbitration Burst: no limit 00:13:57.537 00:13:57.537 Power Management 00:13:57.537 ================ 00:13:57.537 Number of Power States: 1 00:13:57.537 Current Power State: Power State #0 00:13:57.537 Power State #0: 00:13:57.537 Max Power: 25.00 W 00:13:57.537 Non-Operational State: Operational 00:13:57.537 Entry Latency: 16 microseconds 00:13:57.537 Exit Latency: 4 microseconds 00:13:57.537 Relative Read Throughput: 0 00:13:57.537 Relative Read Latency: 0 00:13:57.537 Relative Write Throughput: 0 00:13:57.537 Relative Write Latency: 0 00:13:57.537 Idle Power: Not Reported 00:13:57.537 Active Power: Not Reported 00:13:57.537 Non-Operational Permissive Mode: Not Supported 00:13:57.537 00:13:57.537 Health Information 00:13:57.537 ================== 00:13:57.537 Critical Warnings: 00:13:57.537 Available Spare Space: OK 00:13:57.537 Temperature: OK 00:13:57.537 Device Reliability: OK 00:13:57.537 Read Only: No 00:13:57.537 Volatile Memory Backup: OK 00:13:57.537 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.537 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.537 Available Spare: 0% 00:13:57.537 Available Spare Threshold: 0% 00:13:57.537 Life Percentage Used: 0% 00:13:57.537 Data Units Read: 1119 00:13:57.537 Data Units Written: 992 00:13:57.537 Host Read Commands: 55008 00:13:57.537 Host Write Commands: 53903 00:13:57.537 Controller Busy Time: 0 minutes 00:13:57.537 Power Cycles: 0 00:13:57.537 Power On Hours: 0 hours 00:13:57.537 Unsafe Shutdowns: 0 00:13:57.537 Unrecoverable Media Errors: 0 00:13:57.537 Lifetime Error Log Entries: 0 00:13:57.537 Warning Temperature Time: 0 minutes 00:13:57.537 Critical Temperature Time: 0 minutes 00:13:57.537 00:13:57.537 Number of Queues 00:13:57.537 ================ 00:13:57.537 Number of I/O Submission Queues: 64 00:13:57.537 Number of I/O Completion Queues: 64 00:13:57.537 00:13:57.537 ZNS Specific Controller Data 00:13:57.537 ============================ 00:13:57.537 Zone Append Size Limit: 0 00:13:57.537 00:13:57.537 00:13:57.537 Active Namespaces 00:13:57.537 ================= 00:13:57.537 Namespace ID:1 00:13:57.537 Error Recovery Timeout: Unlimited 00:13:57.537 Command Set Identifier: NVM (00h) 00:13:57.537 Deallocate: Supported 00:13:57.537 Deallocated/Unwritten Error: Supported 00:13:57.537 Deallocated Read Value: All 0x00 00:13:57.537 Deallocate in Write Zeroes: Not Supported 00:13:57.537 Deallocated Guard Field: 0xFFFF 00:13:57.537 Flush: Supported 00:13:57.537 Reservation: Not Supported 00:13:57.537 Namespace Sharing Capabilities: Private 00:13:57.537 Size (in LBAs): 1310720 (5GiB) 00:13:57.537 Capacity (in LBAs): 1310720 (5GiB) 00:13:57.537 Utilization (in LBAs): 1310720 (5GiB) 00:13:57.537 Thin Provisioning: Not Supported 00:13:57.537 Per-NS Atomic Units: No 00:13:57.537 Maximum Single Source Range Length: 128 00:13:57.537 Maximum Copy Length: 128 00:13:57.537 Maximum Source Range Count: 128 00:13:57.537 NGUID/EUI64 Never Reused: No 00:13:57.537 Namespace Write Protected: No 00:13:57.537 Number of LBA Formats: 8 00:13:57.537 Current LBA Format: LBA Format #04 00:13:57.537 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.537 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.537 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.537 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.537 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:13:57.537 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.537 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.537 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.537 00:13:57.537 NVM Specific Namespace Data 00:13:57.537 =========================== 00:13:57.537 Logical Block Storage Tag Mask: 0 00:13:57.537 Protection Information Capabilities: 00:13:57.537 16b Guard Protection Information Storage Tag Support: No 00:13:57.537 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.537 Storage Tag Check Read Support: No 00:13:57.537 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.537 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.538 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.538 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.538 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.538 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.538 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.538 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.538 12:44:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:57.538 12:44:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:13:57.798 ===================================================== 00:13:57.798 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:57.798 ===================================================== 00:13:57.798 Controller Capabilities/Features 00:13:57.798 ================================ 00:13:57.798 Vendor ID: 1b36 00:13:57.798 Subsystem Vendor ID: 1af4 00:13:57.798 Serial Number: 12342 00:13:57.798 Model Number: QEMU NVMe Ctrl 00:13:57.798 Firmware Version: 8.0.0 00:13:57.798 Recommended Arb Burst: 6 00:13:57.798 IEEE OUI Identifier: 00 54 52 00:13:57.798 Multi-path I/O 00:13:57.798 May have multiple subsystem ports: No 00:13:57.798 May have multiple controllers: No 00:13:57.798 Associated with SR-IOV VF: No 00:13:57.798 Max Data Transfer Size: 524288 00:13:57.798 Max Number of Namespaces: 256 00:13:57.798 Max Number of I/O Queues: 64 00:13:57.798 NVMe Specification Version (VS): 1.4 00:13:57.798 NVMe Specification Version (Identify): 1.4 00:13:57.798 Maximum Queue Entries: 2048 00:13:57.798 Contiguous Queues Required: Yes 00:13:57.798 Arbitration Mechanisms Supported 00:13:57.798 Weighted Round Robin: Not Supported 00:13:57.798 Vendor Specific: Not Supported 00:13:57.798 Reset Timeout: 7500 ms 00:13:57.798 Doorbell Stride: 4 bytes 00:13:57.798 NVM Subsystem Reset: Not Supported 00:13:57.798 Command Sets Supported 00:13:57.798 NVM Command Set: Supported 00:13:57.798 Boot Partition: Not Supported 00:13:57.798 Memory Page Size Minimum: 4096 bytes 00:13:57.798 Memory Page Size Maximum: 65536 bytes 00:13:57.798 Persistent Memory Region: Not Supported 00:13:57.798 Optional Asynchronous Events Supported 00:13:57.798 Namespace Attribute Notices: Supported 00:13:57.798 Firmware Activation Notices: Not Supported 00:13:57.798 ANA Change Notices: Not Supported 00:13:57.798 PLE Aggregate Log Change Notices: Not Supported 00:13:57.798 LBA Status Info Alert Notices: 
Not Supported 00:13:57.798 EGE Aggregate Log Change Notices: Not Supported 00:13:57.798 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.798 Zone Descriptor Change Notices: Not Supported 00:13:57.798 Discovery Log Change Notices: Not Supported 00:13:57.798 Controller Attributes 00:13:57.798 128-bit Host Identifier: Not Supported 00:13:57.798 Non-Operational Permissive Mode: Not Supported 00:13:57.798 NVM Sets: Not Supported 00:13:57.799 Read Recovery Levels: Not Supported 00:13:57.799 Endurance Groups: Not Supported 00:13:57.799 Predictable Latency Mode: Not Supported 00:13:57.799 Traffic Based Keep Alive: Not Supported 00:13:57.799 Namespace Granularity: Not Supported 00:13:57.799 SQ Associations: Not Supported 00:13:57.799 UUID List: Not Supported 00:13:57.799 Multi-Domain Subsystem: Not Supported 00:13:57.799 Fixed Capacity Management: Not Supported 00:13:57.799 Variable Capacity Management: Not Supported 00:13:57.799 Delete Endurance Group: Not Supported 00:13:57.799 Delete NVM Set: Not Supported 00:13:57.799 Extended LBA Formats Supported: Supported 00:13:57.799 Flexible Data Placement Supported: Not Supported 00:13:57.799 00:13:57.799 Controller Memory Buffer Support 00:13:57.799 ================================ 00:13:57.799 Supported: No 00:13:57.799 00:13:57.799 Persistent Memory Region Support 00:13:57.799 ================================ 00:13:57.799 Supported: No 00:13:57.799 00:13:57.799 Admin Command Set Attributes 00:13:57.799 ============================ 00:13:57.799 Security Send/Receive: Not Supported 00:13:57.799 Format NVM: Supported 00:13:57.799 Firmware Activate/Download: Not Supported 00:13:57.799 Namespace Management: Supported 00:13:57.799 Device Self-Test: Not Supported 00:13:57.799 Directives: Supported 00:13:57.799 NVMe-MI: Not Supported 00:13:57.799 Virtualization Management: Not Supported 00:13:57.799 Doorbell Buffer Config: Supported 00:13:57.799 Get LBA Status Capability: Not Supported 00:13:57.799 Command & Feature Lockdown Capability: Not Supported 00:13:57.799 Abort Command Limit: 4 00:13:57.799 Async Event Request Limit: 4 00:13:57.799 Number of Firmware Slots: N/A 00:13:57.799 Firmware Slot 1 Read-Only: N/A 00:13:57.799 Firmware Activation Without Reset: N/A 00:13:57.799 Multiple Update Detection Support: N/A 00:13:57.799 Firmware Update Granularity: No Information Provided 00:13:57.799 Per-Namespace SMART Log: Yes 00:13:57.799 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.799 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:13:57.799 Command Effects Log Page: Supported 00:13:57.799 Get Log Page Extended Data: Supported 00:13:57.799 Telemetry Log Pages: Not Supported 00:13:57.799 Persistent Event Log Pages: Not Supported 00:13:57.799 Supported Log Pages Log Page: May Support 00:13:57.799 Commands Supported & Effects Log Page: Not Supported 00:13:57.799 Feature Identifiers & Effects Log Page: May Support 00:13:57.799 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.799 Data Area 4 for Telemetry Log: Not Supported 00:13:57.799 Error Log Page Entries Supported: 1 00:13:57.799 Keep Alive: Not Supported 00:13:57.799 00:13:57.799 NVM Command Set Attributes 00:13:57.799 ========================== 00:13:57.799 Submission Queue Entry Size 00:13:57.799 Max: 64 00:13:57.799 Min: 64 00:13:57.799 Completion Queue Entry Size 00:13:57.799 Max: 16 00:13:57.799 Min: 16 00:13:57.799 Number of Namespaces: 256 00:13:57.799 Compare Command: Supported 00:13:57.799 Write Uncorrectable Command: Not Supported 00:13:57.799 Dataset Management Command:
Supported 00:13:57.799 Write Zeroes Command: Supported 00:13:57.799 Set Features Save Field: Supported 00:13:57.799 Reservations: Not Supported 00:13:57.799 Timestamp: Supported 00:13:57.799 Copy: Supported 00:13:57.799 Volatile Write Cache: Present 00:13:57.799 Atomic Write Unit (Normal): 1 00:13:57.799 Atomic Write Unit (PFail): 1 00:13:57.799 Atomic Compare & Write Unit: 1 00:13:57.799 Fused Compare & Write: Not Supported 00:13:57.799 Scatter-Gather List 00:13:57.799 SGL Command Set: Supported 00:13:57.799 SGL Keyed: Not Supported 00:13:57.799 SGL Bit Bucket Descriptor: Not Supported 00:13:57.799 SGL Metadata Pointer: Not Supported 00:13:57.799 Oversized SGL: Not Supported 00:13:57.799 SGL Metadata Address: Not Supported 00:13:57.799 SGL Offset: Not Supported 00:13:57.799 Transport SGL Data Block: Not Supported 00:13:57.799 Replay Protected Memory Block: Not Supported 00:13:57.799 00:13:57.799 Firmware Slot Information 00:13:57.799 ========================= 00:13:57.799 Active slot: 1 00:13:57.799 Slot 1 Firmware Revision: 1.0 00:13:57.799 00:13:57.799 00:13:57.799 Commands Supported and Effects 00:13:57.799 ============================== 00:13:57.799 Admin Commands 00:13:57.799 -------------- 00:13:57.799 Delete I/O Submission Queue (00h): Supported 00:13:57.799 Create I/O Submission Queue (01h): Supported 00:13:57.799 Get Log Page (02h): Supported 00:13:57.799 Delete I/O Completion Queue (04h): Supported 00:13:57.799 Create I/O Completion Queue (05h): Supported 00:13:57.799 Identify (06h): Supported 00:13:57.799 Abort (08h): Supported 00:13:57.799 Set Features (09h): Supported 00:13:57.799 Get Features (0Ah): Supported 00:13:57.799 Asynchronous Event Request (0Ch): Supported 00:13:57.799 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.799 Directive Send (19h): Supported 00:13:57.799 Directive Receive (1Ah): Supported 00:13:57.799 Virtualization Management (1Ch): Supported 00:13:57.799 Doorbell Buffer Config (7Ch): Supported 00:13:57.799 Format NVM (80h): Supported LBA-Change 00:13:57.799 I/O Commands 00:13:57.799 ------------ 00:13:57.799 Flush (00h): Supported LBA-Change 00:13:57.799 Write (01h): Supported LBA-Change 00:13:57.799 Read (02h): Supported 00:13:57.799 Compare (05h): Supported 00:13:57.799 Write Zeroes (08h): Supported LBA-Change 00:13:57.799 Dataset Management (09h): Supported LBA-Change 00:13:57.799 Unknown (0Ch): Supported 00:13:57.799 Unknown (12h): Supported 00:13:57.799 Copy (19h): Supported LBA-Change 00:13:57.799 Unknown (1Dh): Supported LBA-Change 00:13:57.799 00:13:57.799 Error Log 00:13:57.799 ========= 00:13:57.799 00:13:57.799 Arbitration 00:13:57.799 =========== 00:13:57.799 Arbitration Burst: no limit 00:13:57.799 00:13:57.799 Power Management 00:13:57.799 ================ 00:13:57.799 Number of Power States: 1 00:13:57.799 Current Power State: Power State #0 00:13:57.799 Power State #0: 00:13:57.799 Max Power: 25.00 W 00:13:57.799 Non-Operational State: Operational 00:13:57.799 Entry Latency: 16 microseconds 00:13:57.799 Exit Latency: 4 microseconds 00:13:57.799 Relative Read Throughput: 0 00:13:57.799 Relative Read Latency: 0 00:13:57.799 Relative Write Throughput: 0 00:13:57.799 Relative Write Latency: 0 00:13:57.799 Idle Power: Not Reported 00:13:57.799 Active Power: Not Reported 00:13:57.799 Non-Operational Permissive Mode: Not Supported 00:13:57.799 00:13:57.799 Health Information 00:13:57.799 ================== 00:13:57.799 Critical Warnings: 00:13:57.799 Available Spare Space: OK 00:13:57.799 Temperature: OK 00:13:57.799 Device 
Reliability: OK 00:13:57.799 Read Only: No 00:13:57.799 Volatile Memory Backup: OK 00:13:57.799 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.799 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.799 Available Spare: 0% 00:13:57.799 Available Spare Threshold: 0% 00:13:57.799 Life Percentage Used: 0% 00:13:57.799 Data Units Read: 2272 00:13:57.799 Data Units Written: 2059 00:13:57.799 Host Read Commands: 111471 00:13:57.799 Host Write Commands: 109740 00:13:57.799 Controller Busy Time: 0 minutes 00:13:57.799 Power Cycles: 0 00:13:57.799 Power On Hours: 0 hours 00:13:57.799 Unsafe Shutdowns: 0 00:13:57.799 Unrecoverable Media Errors: 0 00:13:57.799 Lifetime Error Log Entries: 0 00:13:57.799 Warning Temperature Time: 0 minutes 00:13:57.799 Critical Temperature Time: 0 minutes 00:13:57.799 00:13:57.799 Number of Queues 00:13:57.799 ================ 00:13:57.799 Number of I/O Submission Queues: 64 00:13:57.799 Number of I/O Completion Queues: 64 00:13:57.799 00:13:57.799 ZNS Specific Controller Data 00:13:57.799 ============================ 00:13:57.799 Zone Append Size Limit: 0 00:13:57.799 00:13:57.799 00:13:57.799 Active Namespaces 00:13:57.799 ================= 00:13:57.799 Namespace ID:1 00:13:57.799 Error Recovery Timeout: Unlimited 00:13:57.799 Command Set Identifier: NVM (00h) 00:13:57.799 Deallocate: Supported 00:13:57.799 Deallocated/Unwritten Error: Supported 00:13:57.799 Deallocated Read Value: All 0x00 00:13:57.799 Deallocate in Write Zeroes: Not Supported 00:13:57.799 Deallocated Guard Field: 0xFFFF 00:13:57.799 Flush: Supported 00:13:57.799 Reservation: Not Supported 00:13:57.799 Namespace Sharing Capabilities: Private 00:13:57.799 Size (in LBAs): 1048576 (4GiB) 00:13:57.799 Capacity (in LBAs): 1048576 (4GiB) 00:13:57.799 Utilization (in LBAs): 1048576 (4GiB) 00:13:57.799 Thin Provisioning: Not Supported 00:13:57.799 Per-NS Atomic Units: No 00:13:57.799 Maximum Single Source Range Length: 128 00:13:57.800 Maximum Copy Length: 128 00:13:57.800 Maximum Source Range Count: 128 00:13:57.800 NGUID/EUI64 Never Reused: No 00:13:57.800 Namespace Write Protected: No 00:13:57.800 Number of LBA Formats: 8 00:13:57.800 Current LBA Format: LBA Format #04 00:13:57.800 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.800 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.800 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.800 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.800 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.800 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.800 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.800 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.800 00:13:57.800 NVM Specific Namespace Data 00:13:57.800 =========================== 00:13:57.800 Logical Block Storage Tag Mask: 0 00:13:57.800 Protection Information Capabilities: 00:13:57.800 16b Guard Protection Information Storage Tag Support: No 00:13:57.800 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.800 Storage Tag Check Read Support: No 00:13:57.800 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Namespace ID:2 00:13:57.800 Error Recovery Timeout: Unlimited 00:13:57.800 Command Set Identifier: NVM (00h) 00:13:57.800 Deallocate: Supported 00:13:57.800 Deallocated/Unwritten Error: Supported 00:13:57.800 Deallocated Read Value: All 0x00 00:13:57.800 Deallocate in Write Zeroes: Not Supported 00:13:57.800 Deallocated Guard Field: 0xFFFF 00:13:57.800 Flush: Supported 00:13:57.800 Reservation: Not Supported 00:13:57.800 Namespace Sharing Capabilities: Private 00:13:57.800 Size (in LBAs): 1048576 (4GiB) 00:13:57.800 Capacity (in LBAs): 1048576 (4GiB) 00:13:57.800 Utilization (in LBAs): 1048576 (4GiB) 00:13:57.800 Thin Provisioning: Not Supported 00:13:57.800 Per-NS Atomic Units: No 00:13:57.800 Maximum Single Source Range Length: 128 00:13:57.800 Maximum Copy Length: 128 00:13:57.800 Maximum Source Range Count: 128 00:13:57.800 NGUID/EUI64 Never Reused: No 00:13:57.800 Namespace Write Protected: No 00:13:57.800 Number of LBA Formats: 8 00:13:57.800 Current LBA Format: LBA Format #04 00:13:57.800 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.800 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.800 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.800 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.800 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.800 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.800 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.800 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.800 00:13:57.800 NVM Specific Namespace Data 00:13:57.800 =========================== 00:13:57.800 Logical Block Storage Tag Mask: 0 00:13:57.800 Protection Information Capabilities: 00:13:57.800 16b Guard Protection Information Storage Tag Support: No 00:13:57.800 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.800 Storage Tag Check Read Support: No 00:13:57.800 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Namespace ID:3 00:13:57.800 Error Recovery Timeout: Unlimited 00:13:57.800 Command Set Identifier: NVM (00h) 00:13:57.800 Deallocate: Supported 00:13:57.800 Deallocated/Unwritten Error: Supported 00:13:57.800 Deallocated Read Value: All 0x00 00:13:57.800 Deallocate in Write Zeroes: Not Supported 00:13:57.800 Deallocated Guard Field: 0xFFFF 00:13:57.800 Flush: Supported 00:13:57.800 Reservation: Not Supported 00:13:57.800 
Namespace Sharing Capabilities: Private 00:13:57.800 Size (in LBAs): 1048576 (4GiB) 00:13:57.800 Capacity (in LBAs): 1048576 (4GiB) 00:13:57.800 Utilization (in LBAs): 1048576 (4GiB) 00:13:57.800 Thin Provisioning: Not Supported 00:13:57.800 Per-NS Atomic Units: No 00:13:57.800 Maximum Single Source Range Length: 128 00:13:57.800 Maximum Copy Length: 128 00:13:57.800 Maximum Source Range Count: 128 00:13:57.800 NGUID/EUI64 Never Reused: No 00:13:57.800 Namespace Write Protected: No 00:13:57.800 Number of LBA Formats: 8 00:13:57.800 Current LBA Format: LBA Format #04 00:13:57.800 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.800 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.800 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.800 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.800 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.800 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.800 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:13:57.800 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.800 00:13:57.800 NVM Specific Namespace Data 00:13:57.800 =========================== 00:13:57.800 Logical Block Storage Tag Mask: 0 00:13:57.800 Protection Information Capabilities: 00:13:57.800 16b Guard Protection Information Storage Tag Support: No 00:13:57.800 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.800 Storage Tag Check Read Support: No 00:13:57.800 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.800 12:44:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:13:57.800 12:44:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:13:57.800 ===================================================== 00:13:57.800 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:57.800 ===================================================== 00:13:57.800 Controller Capabilities/Features 00:13:57.800 ================================ 00:13:57.800 Vendor ID: 1b36 00:13:57.800 Subsystem Vendor ID: 1af4 00:13:57.800 Serial Number: 12343 00:13:57.800 Model Number: QEMU NVMe Ctrl 00:13:57.800 Firmware Version: 8.0.0 00:13:57.800 Recommended Arb Burst: 6 00:13:57.800 IEEE OUI Identifier: 00 54 52 00:13:57.800 Multi-path I/O 00:13:57.800 May have multiple subsystem ports: No 00:13:57.800 May have multiple controllers: Yes 00:13:57.800 Associated with SR-IOV VF: No 00:13:57.800 Max Data Transfer Size: 524288 00:13:57.800 Max Number of Namespaces: 256 00:13:57.800 Max Number of I/O Queues: 64 00:13:57.800 NVMe Specification Version (VS): 1.4 00:13:57.800 NVMe Specification Version (Identify): 1.4 00:13:57.800 Maximum Queue Entries: 2048 
00:13:57.800 Contiguous Queues Required: Yes 00:13:57.800 Arbitration Mechanisms Supported 00:13:57.800 Weighted Round Robin: Not Supported 00:13:57.800 Vendor Specific: Not Supported 00:13:57.800 Reset Timeout: 7500 ms 00:13:57.800 Doorbell Stride: 4 bytes 00:13:57.800 NVM Subsystem Reset: Not Supported 00:13:57.800 Command Sets Supported 00:13:57.801 NVM Command Set: Supported 00:13:57.801 Boot Partition: Not Supported 00:13:57.801 Memory Page Size Minimum: 4096 bytes 00:13:57.801 Memory Page Size Maximum: 65536 bytes 00:13:57.801 Persistent Memory Region: Not Supported 00:13:57.801 Optional Asynchronous Events Supported 00:13:57.801 Namespace Attribute Notices: Supported 00:13:57.801 Firmware Activation Notices: Not Supported 00:13:57.801 ANA Change Notices: Not Supported 00:13:57.801 PLE Aggregate Log Change Notices: Not Supported 00:13:57.801 LBA Status Info Alert Notices: Not Supported 00:13:57.801 EGE Aggregate Log Change Notices: Not Supported 00:13:57.801 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.801 Zone Descriptor Change Notices: Not Supported 00:13:57.801 Discovery Log Change Notices: Not Supported 00:13:57.801 Controller Attributes 00:13:57.801 128-bit Host Identifier: Not Supported 00:13:57.801 Non-Operational Permissive Mode: Not Supported 00:13:57.801 NVM Sets: Not Supported 00:13:57.801 Read Recovery Levels: Not Supported 00:13:57.801 Endurance Groups: Supported 00:13:57.801 Predictable Latency Mode: Not Supported 00:13:57.801 Traffic Based Keep Alive: Not Supported 00:13:57.801 Namespace Granularity: Not Supported 00:13:57.801 SQ Associations: Not Supported 00:13:57.801 UUID List: Not Supported 00:13:57.801 Multi-Domain Subsystem: Not Supported 00:13:57.801 Fixed Capacity Management: Not Supported 00:13:57.801 Variable Capacity Management: Not Supported 00:13:57.801 Delete Endurance Group: Not Supported 00:13:57.801 Delete NVM Set: Not Supported 00:13:57.801 Extended LBA Formats Supported: Supported 00:13:57.801 Flexible Data Placement Supported: Supported 00:13:57.801 00:13:57.801 Controller Memory Buffer Support 00:13:57.801 ================================ 00:13:57.801 Supported: No 00:13:57.801 00:13:57.801 Persistent Memory Region Support 00:13:57.801 ================================ 00:13:57.801 Supported: No 00:13:57.801 00:13:57.801 Admin Command Set Attributes 00:13:57.801 ============================ 00:13:57.801 Security Send/Receive: Not Supported 00:13:57.801 Format NVM: Supported 00:13:57.801 Firmware Activate/Download: Not Supported 00:13:57.801 Namespace Management: Supported 00:13:57.801 Device Self-Test: Not Supported 00:13:57.801 Directives: Supported 00:13:57.801 NVMe-MI: Not Supported 00:13:57.801 Virtualization Management: Not Supported 00:13:57.801 Doorbell Buffer Config: Supported 00:13:57.801 Get LBA Status Capability: Not Supported 00:13:57.801 Command & Feature Lockdown Capability: Not Supported 00:13:57.801 Abort Command Limit: 4 00:13:57.801 Async Event Request Limit: 4 00:13:57.801 Number of Firmware Slots: N/A 00:13:57.801 Firmware Slot 1 Read-Only: N/A 00:13:57.801 Firmware Activation Without Reset: N/A 00:13:57.801 Multiple Update Detection Support: N/A 00:13:57.801 Firmware Update Granularity: No Information Provided 00:13:57.801 Per-Namespace SMART Log: Yes 00:13:57.801 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.801 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:13:57.801 Command Effects Log Page: Supported 00:13:57.801 Get Log Page Extended Data: Supported 00:13:57.801 Telemetry Log Pages: Not
Supported 00:13:57.801 Persistent Event Log Pages: Not Supported 00:13:57.801 Supported Log Pages Log Page: May Support 00:13:57.801 Commands Supported & Effects Log Page: Not Supported 00:13:57.801 Feature Identifiers & Effects Log Page: May Support 00:13:57.801 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.801 Data Area 4 for Telemetry Log: Not Supported 00:13:57.801 Error Log Page Entries Supported: 1 00:13:57.801 Keep Alive: Not Supported 00:13:57.801 00:13:57.801 NVM Command Set Attributes 00:13:57.801 ========================== 00:13:57.801 Submission Queue Entry Size 00:13:57.801 Max: 64 00:13:57.801 Min: 64 00:13:57.801 Completion Queue Entry Size 00:13:57.801 Max: 16 00:13:57.801 Min: 16 00:13:57.801 Number of Namespaces: 256 00:13:57.801 Compare Command: Supported 00:13:57.801 Write Uncorrectable Command: Not Supported 00:13:57.801 Dataset Management Command: Supported 00:13:57.801 Write Zeroes Command: Supported 00:13:57.801 Set Features Save Field: Supported 00:13:57.801 Reservations: Not Supported 00:13:57.801 Timestamp: Supported 00:13:57.801 Copy: Supported 00:13:57.801 Volatile Write Cache: Present 00:13:57.801 Atomic Write Unit (Normal): 1 00:13:57.801 Atomic Write Unit (PFail): 1 00:13:57.801 Atomic Compare & Write Unit: 1 00:13:57.801 Fused Compare & Write: Not Supported 00:13:57.801 Scatter-Gather List 00:13:57.801 SGL Command Set: Supported 00:13:57.801 SGL Keyed: Not Supported 00:13:57.801 SGL Bit Bucket Descriptor: Not Supported 00:13:57.801 SGL Metadata Pointer: Not Supported 00:13:57.801 Oversized SGL: Not Supported 00:13:57.801 SGL Metadata Address: Not Supported 00:13:57.801 SGL Offset: Not Supported 00:13:57.801 Transport SGL Data Block: Not Supported 00:13:57.801 Replay Protected Memory Block: Not Supported 00:13:57.801 00:13:57.801 Firmware Slot Information 00:13:57.801 ========================= 00:13:57.801 Active slot: 1 00:13:57.801 Slot 1 Firmware Revision: 1.0 00:13:57.801 00:13:57.801 00:13:57.801 Commands Supported and Effects 00:13:57.801 ============================== 00:13:57.801 Admin Commands 00:13:57.801 -------------- 00:13:57.801 Delete I/O Submission Queue (00h): Supported 00:13:57.801 Create I/O Submission Queue (01h): Supported 00:13:57.801 Get Log Page (02h): Supported 00:13:57.801 Delete I/O Completion Queue (04h): Supported 00:13:57.801 Create I/O Completion Queue (05h): Supported 00:13:57.801 Identify (06h): Supported 00:13:57.801 Abort (08h): Supported 00:13:57.801 Set Features (09h): Supported 00:13:57.801 Get Features (0Ah): Supported 00:13:57.801 Asynchronous Event Request (0Ch): Supported 00:13:57.801 Namespace Attachment (15h): Supported NS-Inventory-Change 00:13:57.801 Directive Send (19h): Supported 00:13:57.801 Directive Receive (1Ah): Supported 00:13:57.801 Virtualization Management (1Ch): Supported 00:13:57.801 Doorbell Buffer Config (7Ch): Supported 00:13:57.801 Format NVM (80h): Supported LBA-Change 00:13:57.801 I/O Commands 00:13:57.801 ------------ 00:13:57.801 Flush (00h): Supported LBA-Change 00:13:57.801 Write (01h): Supported LBA-Change 00:13:57.801 Read (02h): Supported 00:13:57.801 Compare (05h): Supported 00:13:57.801 Write Zeroes (08h): Supported LBA-Change 00:13:57.801 Dataset Management (09h): Supported LBA-Change 00:13:57.801 Unknown (0Ch): Supported 00:13:57.801 Unknown (12h): Supported 00:13:57.801 Copy (19h): Supported LBA-Change 00:13:57.801 Unknown (1Dh): Supported LBA-Change 00:13:57.801 00:13:57.801 Error Log 00:13:57.801 ========= 00:13:57.801 00:13:57.801 Arbitration 00:13:57.801 ===========
00:13:57.801 Arbitration Burst: no limit 00:13:57.801 00:13:57.801 Power Management 00:13:57.801 ================ 00:13:57.801 Number of Power States: 1 00:13:57.801 Current Power State: Power State #0 00:13:57.801 Power State #0: 00:13:57.801 Max Power: 25.00 W 00:13:57.801 Non-Operational State: Operational 00:13:57.801 Entry Latency: 16 microseconds 00:13:57.801 Exit Latency: 4 microseconds 00:13:57.801 Relative Read Throughput: 0 00:13:57.801 Relative Read Latency: 0 00:13:57.801 Relative Write Throughput: 0 00:13:57.801 Relative Write Latency: 0 00:13:57.801 Idle Power: Not Reported 00:13:57.801 Active Power: Not Reported 00:13:57.801 Non-Operational Permissive Mode: Not Supported 00:13:57.801 00:13:57.801 Health Information 00:13:57.801 ================== 00:13:57.801 Critical Warnings: 00:13:57.801 Available Spare Space: OK 00:13:57.801 Temperature: OK 00:13:57.801 Device Reliability: OK 00:13:57.801 Read Only: No 00:13:57.801 Volatile Memory Backup: OK 00:13:57.801 Current Temperature: 323 Kelvin (50 Celsius) 00:13:57.801 Temperature Threshold: 343 Kelvin (70 Celsius) 00:13:57.801 Available Spare: 0% 00:13:57.801 Available Spare Threshold: 0% 00:13:57.801 Life Percentage Used: 0% 00:13:57.801 Data Units Read: 795 00:13:57.801 Data Units Written: 724 00:13:57.801 Host Read Commands: 37607 00:13:57.801 Host Write Commands: 37030 00:13:57.801 Controller Busy Time: 0 minutes 00:13:57.801 Power Cycles: 0 00:13:57.801 Power On Hours: 0 hours 00:13:57.801 Unsafe Shutdowns: 0 00:13:57.801 Unrecoverable Media Errors: 0 00:13:57.801 Lifetime Error Log Entries: 0 00:13:57.801 Warning Temperature Time: 0 minutes 00:13:57.801 Critical Temperature Time: 0 minutes 00:13:57.801 00:13:57.801 Number of Queues 00:13:57.801 ================ 00:13:57.802 Number of I/O Submission Queues: 64 00:13:57.802 Number of I/O Completion Queues: 64 00:13:57.802 00:13:57.802 ZNS Specific Controller Data 00:13:57.802 ============================ 00:13:57.802 Zone Append Size Limit: 0 00:13:57.802 00:13:57.802 00:13:57.802 Active Namespaces 00:13:57.802 ================= 00:13:57.802 Namespace ID:1 00:13:57.802 Error Recovery Timeout: Unlimited 00:13:57.802 Command Set Identifier: NVM (00h) 00:13:57.802 Deallocate: Supported 00:13:57.802 Deallocated/Unwritten Error: Supported 00:13:57.802 Deallocated Read Value: All 0x00 00:13:57.802 Deallocate in Write Zeroes: Not Supported 00:13:57.802 Deallocated Guard Field: 0xFFFF 00:13:57.802 Flush: Supported 00:13:57.802 Reservation: Not Supported 00:13:57.802 Namespace Sharing Capabilities: Multiple Controllers 00:13:57.802 Size (in LBAs): 262144 (1GiB) 00:13:57.802 Capacity (in LBAs): 262144 (1GiB) 00:13:57.802 Utilization (in LBAs): 262144 (1GiB) 00:13:57.802 Thin Provisioning: Not Supported 00:13:57.802 Per-NS Atomic Units: No 00:13:57.802 Maximum Single Source Range Length: 128 00:13:57.802 Maximum Copy Length: 128 00:13:57.802 Maximum Source Range Count: 128 00:13:57.802 NGUID/EUI64 Never Reused: No 00:13:57.802 Namespace Write Protected: No 00:13:57.802 Endurance group ID: 1 00:13:57.802 Number of LBA Formats: 8 00:13:57.802 Current LBA Format: LBA Format #04 00:13:57.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.802 LBA Format #01: Data Size: 512 Metadata Size: 8 00:13:57.802 LBA Format #02: Data Size: 512 Metadata Size: 16 00:13:57.802 LBA Format #03: Data Size: 512 Metadata Size: 64 00:13:57.802 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:13:57.802 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:13:57.802 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:13:57.802 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:13:57.802 00:13:57.802 Get Feature FDP: 00:13:57.802 ================ 00:13:57.802 Enabled: Yes 00:13:57.802 FDP configuration index: 0 00:13:57.802 00:13:57.802 FDP configurations log page 00:13:57.802 =========================== 00:13:57.802 Number of FDP configurations: 1 00:13:57.802 Version: 0 00:13:57.802 Size: 112 00:13:57.802 FDP Configuration Descriptor: 0 00:13:57.802 Descriptor Size: 96 00:13:57.802 Reclaim Group Identifier format: 2 00:13:57.802 FDP Volatile Write Cache: Not Present 00:13:57.802 FDP Configuration: Valid 00:13:57.802 Vendor Specific Size: 0 00:13:57.802 Number of Reclaim Groups: 2 00:13:57.802 Number of Reclaim Unit Handles: 8 00:13:57.802 Max Placement Identifiers: 128 00:13:57.802 Number of Namespaces Supported: 256 00:13:57.802 Reclaim Unit Nominal Size: 6000000 bytes 00:13:57.802 Estimated Reclaim Unit Time Limit: Not Reported 00:13:57.802 RUH Desc #000: RUH Type: Initially Isolated 00:13:57.802 RUH Desc #001: RUH Type: Initially Isolated 00:13:57.802 RUH Desc #002: RUH Type: Initially Isolated 00:13:57.802 RUH Desc #003: RUH Type: Initially Isolated 00:13:57.802 RUH Desc #004: RUH Type: Initially Isolated 00:13:57.802 RUH Desc #005: RUH Type: Initially Isolated 00:13:57.802 RUH Desc #006: RUH Type: Initially Isolated 00:13:57.802 RUH Desc #007: RUH Type: Initially Isolated 00:13:57.802 00:13:57.802 FDP reclaim unit handle usage log page 00:13:57.802 ====================================== 00:13:57.802 Number of Reclaim Unit Handles: 8 00:13:57.802 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:57.802 RUH Usage Desc #001: RUH Attributes: Unused 00:13:57.802 RUH Usage Desc #002: RUH Attributes: Unused 00:13:57.802 RUH Usage Desc #003: RUH Attributes: Unused 00:13:57.802 RUH Usage Desc #004: RUH Attributes: Unused 00:13:57.802 RUH Usage Desc #005: RUH Attributes: Unused 00:13:57.802 RUH Usage Desc #006: RUH Attributes: Unused 00:13:57.802 RUH Usage Desc #007: RUH Attributes: Unused 00:13:57.802 00:13:57.802 FDP statistics log page 00:13:57.802 ======================= 00:13:57.802 Host bytes with metadata written: 440115200 00:13:57.802 Media bytes with metadata written: 440188928 00:13:57.802 Media bytes erased: 0 00:13:57.802 00:13:57.802 FDP events log page 00:13:57.802 =================== 00:13:57.802 Number of FDP events: 0 00:13:57.802 00:13:57.802 NVM Specific Namespace Data 00:13:57.802 =========================== 00:13:57.802 Logical Block Storage Tag Mask: 0 00:13:57.802 Protection Information Capabilities: 00:13:57.802 16b Guard Protection Information Storage Tag Support: No 00:13:57.802 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:13:57.802 Storage Tag Check Read Support: No 00:13:57.802 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:13:57.802 00:13:57.802 real 0m1.127s 00:13:57.802 user 0m0.397s 00:13:57.802 sys 0m0.537s 00:13:57.802 12:44:57 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.802 12:44:57 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:13:57.802 ************************************ 00:13:57.802 END TEST nvme_identify 00:13:57.802 ************************************ 00:13:58.059 12:44:57 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:13:58.059 12:44:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.059 12:44:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.059 12:44:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.059 ************************************ 00:13:58.059 START TEST nvme_perf 00:13:58.059 ************************************ 00:13:58.059 12:44:57 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:13:58.059 12:44:57 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:13:59.434 Initializing NVMe Controllers 00:13:59.434 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:59.434 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:59.434 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:59.434 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:59.434 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:59.434 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:59.434 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:59.434 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:59.434 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:59.434 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:59.434 Initialization complete. Launching workers. 
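The perf invocation above drives every attached controller at a queue depth of 128 (-q 128) with 12288-byte reads (-w read -o 12288) for one second (-t 1); -LL requests the detailed per-device latency histograms that follow in addition to the summary tables (a single -L would print only the summaries). A minimal sketch for re-running the identify/perf pair by hand against one controller, assuming the build path and PCIe address shown in this log; the -r filter on the perf line is an addition here (the original run attached all four controllers), the remaining flags are copied verbatim:

    # Build path and traddr are taken from the log above; adjust for your checkout.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    # Dump controller, namespace, and FDP data for a single PCIe controller.
    "$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
    # One-second 100% read workload; -i and -N are carried over unchanged from the log.
    "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N \
        -r 'trtype:PCIe traddr:0000:00:13.0'

Note that the aggregate line in the table below is consistent with these parameters: roughly 107k IOPS of 12288-byte reads works out to about 1254 MiB/s across the six namespaces.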
00:13:59.434 ======================================================== 00:13:59.434 Latency(us) 00:13:59.434 Device Information : IOPS MiB/s Average min max 00:13:59.435 PCIE (0000:00:10.0) NSID 1 from core 0: 17833.65 208.99 7179.14 5971.27 24772.81 00:13:59.435 PCIE (0000:00:11.0) NSID 1 from core 0: 17833.65 208.99 7173.51 6071.47 23879.23 00:13:59.435 PCIE (0000:00:13.0) NSID 1 from core 0: 17833.65 208.99 7166.47 5251.19 23513.85 00:13:59.435 PCIE (0000:00:12.0) NSID 1 from core 0: 17833.65 208.99 7159.43 4945.05 22884.82 00:13:59.435 PCIE (0000:00:12.0) NSID 2 from core 0: 17833.65 208.99 7152.29 4715.09 22220.50 00:13:59.435 PCIE (0000:00:12.0) NSID 3 from core 0: 17833.65 208.99 7145.14 4167.90 21536.79 00:13:59.435 ======================================================== 00:13:59.435 Total : 107001.93 1253.93 7162.66 4167.90 24772.81 00:13:59.435 00:13:59.435 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:59.435 ================================================================================= 00:13:59.435 1.00000% : 6125.095us 00:13:59.435 10.00000% : 6301.538us 00:13:59.435 25.00000% : 6553.600us 00:13:59.435 50.00000% : 6856.074us 00:13:59.435 75.00000% : 7208.960us 00:13:59.435 90.00000% : 8065.969us 00:13:59.435 95.00000% : 9880.812us 00:13:59.435 98.00000% : 11090.708us 00:13:59.435 99.00000% : 11746.068us 00:13:59.435 99.50000% : 19358.326us 00:13:59.435 99.90000% : 24399.557us 00:13:59.435 99.99000% : 24802.855us 00:13:59.435 99.99900% : 24802.855us 00:13:59.435 99.99990% : 24802.855us 00:13:59.435 99.99999% : 24802.855us 00:13:59.435 00:13:59.435 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:13:59.435 ================================================================================= 00:13:59.435 1.00000% : 6200.714us 00:13:59.435 10.00000% : 6351.951us 00:13:59.435 25.00000% : 6553.600us 00:13:59.435 50.00000% : 6856.074us 00:13:59.435 75.00000% : 7158.548us 00:13:59.435 90.00000% : 8166.794us 00:13:59.435 95.00000% : 9931.225us 00:13:59.435 98.00000% : 11141.120us 00:13:59.435 99.00000% : 11796.480us 00:13:59.435 99.50000% : 18753.378us 00:13:59.435 99.90000% : 23592.960us 00:13:59.435 99.99000% : 23895.434us 00:13:59.435 99.99900% : 23895.434us 00:13:59.435 99.99990% : 23895.434us 00:13:59.435 99.99999% : 23895.434us 00:13:59.435 00:13:59.435 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:59.435 ================================================================================= 00:13:59.435 1.00000% : 6200.714us 00:13:59.435 10.00000% : 6351.951us 00:13:59.435 25.00000% : 6553.600us 00:13:59.435 50.00000% : 6856.074us 00:13:59.435 75.00000% : 7158.548us 00:13:59.435 90.00000% : 8015.557us 00:13:59.435 95.00000% : 9830.400us 00:13:59.435 98.00000% : 10989.883us 00:13:59.435 99.00000% : 11846.892us 00:13:59.435 99.50000% : 18450.905us 00:13:59.435 99.90000% : 23290.486us 00:13:59.435 99.99000% : 23492.135us 00:13:59.435 99.99900% : 23592.960us 00:13:59.435 99.99990% : 23592.960us 00:13:59.435 99.99999% : 23592.960us 00:13:59.435 00:13:59.435 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:13:59.435 ================================================================================= 00:13:59.435 1.00000% : 6175.508us 00:13:59.435 10.00000% : 6351.951us 00:13:59.435 25.00000% : 6553.600us 00:13:59.435 50.00000% : 6856.074us 00:13:59.435 75.00000% : 7158.548us 00:13:59.435 90.00000% : 7914.732us 00:13:59.435 95.00000% : 9779.988us 00:13:59.435 98.00000% : 11040.295us 00:13:59.435 99.00000% : 
11897.305us 00:13:59.435 99.50000% : 17745.132us 00:13:59.435 99.90000% : 22584.714us 00:13:59.435 99.99000% : 22887.188us 00:13:59.435 99.99900% : 22887.188us 00:13:59.435 99.99990% : 22887.188us 00:13:59.435 99.99999% : 22887.188us 00:13:59.435 00:13:59.435 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:13:59.435 ================================================================================= 00:13:59.435 1.00000% : 6175.508us 00:13:59.435 10.00000% : 6351.951us 00:13:59.435 25.00000% : 6604.012us 00:13:59.435 50.00000% : 6856.074us 00:13:59.435 75.00000% : 7158.548us 00:13:59.435 90.00000% : 7864.320us 00:13:59.435 95.00000% : 9830.400us 00:13:59.435 98.00000% : 11090.708us 00:13:59.435 99.00000% : 11796.480us 00:13:59.435 99.50000% : 17140.185us 00:13:59.435 99.90000% : 21979.766us 00:13:59.435 99.99000% : 22282.240us 00:13:59.435 99.99900% : 22282.240us 00:13:59.435 99.99990% : 22282.240us 00:13:59.435 99.99999% : 22282.240us 00:13:59.435 00:13:59.435 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:13:59.435 ================================================================================= 00:13:59.435 1.00000% : 6200.714us 00:13:59.435 10.00000% : 6351.951us 00:13:59.435 25.00000% : 6553.600us 00:13:59.435 50.00000% : 6856.074us 00:13:59.435 75.00000% : 7158.548us 00:13:59.435 90.00000% : 7864.320us 00:13:59.435 95.00000% : 9880.812us 00:13:59.435 98.00000% : 11090.708us 00:13:59.435 99.00000% : 11746.068us 00:13:59.435 99.50000% : 16434.412us 00:13:59.435 99.90000% : 21273.994us 00:13:59.435 99.99000% : 21576.468us 00:13:59.435 99.99900% : 21576.468us 00:13:59.435 99.99990% : 21576.468us 00:13:59.435 99.99999% : 21576.468us 00:13:59.435 00:13:59.435 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:13:59.435 ============================================================================== 00:13:59.435 Range in us Cumulative IO count 00:13:59.435 5948.652 - 5973.858: 0.0056% ( 1) 00:13:59.435 5973.858 - 5999.065: 0.0280% ( 4) 00:13:59.435 5999.065 - 6024.271: 0.0896% ( 11) 00:13:59.435 6024.271 - 6049.477: 0.2296% ( 25) 00:13:59.435 6049.477 - 6074.683: 0.4480% ( 39) 00:13:59.435 6074.683 - 6099.889: 0.7224% ( 49) 00:13:59.435 6099.889 - 6125.095: 1.2937% ( 102) 00:13:59.435 6125.095 - 6150.302: 2.1281% ( 149) 00:13:59.435 6150.302 - 6175.508: 3.2258% ( 196) 00:13:59.435 6175.508 - 6200.714: 4.6763% ( 259) 00:13:59.435 6200.714 - 6225.920: 6.0204% ( 240) 00:13:59.435 6225.920 - 6251.126: 7.5549% ( 274) 00:13:59.435 6251.126 - 6276.332: 9.0558% ( 268) 00:13:59.435 6276.332 - 6301.538: 10.5119% ( 260) 00:13:59.435 6301.538 - 6326.745: 12.0352% ( 272) 00:13:59.435 6326.745 - 6351.951: 13.5529% ( 271) 00:13:59.435 6351.951 - 6377.157: 15.2162% ( 297) 00:13:59.435 6377.157 - 6402.363: 16.7395% ( 272) 00:13:59.435 6402.363 - 6427.569: 18.3916% ( 295) 00:13:59.435 6427.569 - 6452.775: 20.0213% ( 291) 00:13:59.435 6452.775 - 6503.188: 23.5215% ( 625) 00:13:59.435 6503.188 - 6553.600: 27.2121% ( 659) 00:13:59.435 6553.600 - 6604.012: 30.9588% ( 669) 00:13:59.435 6604.012 - 6654.425: 34.7278% ( 673) 00:13:59.435 6654.425 - 6704.837: 38.6929% ( 708) 00:13:59.435 6704.837 - 6755.249: 42.5627% ( 691) 00:13:59.435 6755.249 - 6805.662: 46.4830% ( 700) 00:13:59.435 6805.662 - 6856.074: 50.4032% ( 700) 00:13:59.435 6856.074 - 6906.486: 54.3851% ( 711) 00:13:59.435 6906.486 - 6956.898: 58.4397% ( 724) 00:13:59.435 6956.898 - 7007.311: 62.4944% ( 724) 00:13:59.435 7007.311 - 7057.723: 66.4259% ( 702) 00:13:59.435 7057.723 - 7108.135: 70.3125% ( 694) 
00:13:59.435 7108.135 - 7158.548: 74.0255% ( 663) 00:13:59.435 7158.548 - 7208.960: 77.3746% ( 598) 00:13:59.435 7208.960 - 7259.372: 79.9283% ( 456) 00:13:59.435 7259.372 - 7309.785: 81.7708% ( 329) 00:13:59.435 7309.785 - 7360.197: 83.1989% ( 255) 00:13:59.435 7360.197 - 7410.609: 84.2518% ( 188) 00:13:59.435 7410.609 - 7461.022: 85.2543% ( 179) 00:13:59.435 7461.022 - 7511.434: 86.1447% ( 159) 00:13:59.435 7511.434 - 7561.846: 86.9680% ( 147) 00:13:59.435 7561.846 - 7612.258: 87.6008% ( 113) 00:13:59.435 7612.258 - 7662.671: 88.0880% ( 87) 00:13:59.435 7662.671 - 7713.083: 88.5025% ( 74) 00:13:59.435 7713.083 - 7763.495: 88.8273% ( 58) 00:13:59.435 7763.495 - 7813.908: 89.1129% ( 51) 00:13:59.435 7813.908 - 7864.320: 89.3201% ( 37) 00:13:59.435 7864.320 - 7914.732: 89.4993% ( 32) 00:13:59.435 7914.732 - 7965.145: 89.7233% ( 40) 00:13:59.435 7965.145 - 8015.557: 89.8858% ( 29) 00:13:59.435 8015.557 - 8065.969: 90.0202% ( 24) 00:13:59.435 8065.969 - 8116.382: 90.1994% ( 32) 00:13:59.435 8116.382 - 8166.794: 90.3506% ( 27) 00:13:59.435 8166.794 - 8217.206: 90.5018% ( 27) 00:13:59.435 8217.206 - 8267.618: 90.6586% ( 28) 00:13:59.435 8267.618 - 8318.031: 90.7874% ( 23) 00:13:59.435 8318.031 - 8368.443: 90.9386% ( 27) 00:13:59.435 8368.443 - 8418.855: 91.0730% ( 24) 00:13:59.435 8418.855 - 8469.268: 91.2410% ( 30) 00:13:59.435 8469.268 - 8519.680: 91.3754% ( 24) 00:13:59.435 8519.680 - 8570.092: 91.5379% ( 29) 00:13:59.435 8570.092 - 8620.505: 91.7283% ( 34) 00:13:59.435 8620.505 - 8670.917: 91.8403% ( 20) 00:13:59.435 8670.917 - 8721.329: 92.0195% ( 32) 00:13:59.435 8721.329 - 8771.742: 92.1651% ( 26) 00:13:59.435 8771.742 - 8822.154: 92.3387% ( 31) 00:13:59.435 8822.154 - 8872.566: 92.4843% ( 26) 00:13:59.435 8872.566 - 8922.978: 92.6467% ( 29) 00:13:59.435 8922.978 - 8973.391: 92.7755% ( 23) 00:13:59.435 8973.391 - 9023.803: 92.9155% ( 25) 00:13:59.435 9023.803 - 9074.215: 93.0332% ( 21) 00:13:59.435 9074.215 - 9124.628: 93.1620% ( 23) 00:13:59.435 9124.628 - 9175.040: 93.2908% ( 23) 00:13:59.435 9175.040 - 9225.452: 93.4196% ( 23) 00:13:59.435 9225.452 - 9275.865: 93.5428% ( 22) 00:13:59.435 9275.865 - 9326.277: 93.6660% ( 22) 00:13:59.435 9326.277 - 9376.689: 93.7948% ( 23) 00:13:59.435 9376.689 - 9427.102: 93.9292% ( 24) 00:13:59.435 9427.102 - 9477.514: 94.0412% ( 20) 00:13:59.436 9477.514 - 9527.926: 94.1588% ( 21) 00:13:59.436 9527.926 - 9578.338: 94.3156% ( 28) 00:13:59.436 9578.338 - 9628.751: 94.4612% ( 26) 00:13:59.436 9628.751 - 9679.163: 94.5677% ( 19) 00:13:59.436 9679.163 - 9729.575: 94.6909% ( 22) 00:13:59.436 9729.575 - 9779.988: 94.8085% ( 21) 00:13:59.436 9779.988 - 9830.400: 94.9093% ( 18) 00:13:59.436 9830.400 - 9880.812: 95.0437% ( 24) 00:13:59.436 9880.812 - 9931.225: 95.1445% ( 18) 00:13:59.436 9931.225 - 9981.637: 95.2341% ( 16) 00:13:59.436 9981.637 - 10032.049: 95.3461% ( 20) 00:13:59.436 10032.049 - 10082.462: 95.4749% ( 23) 00:13:59.436 10082.462 - 10132.874: 95.6093% ( 24) 00:13:59.436 10132.874 - 10183.286: 95.7437% ( 24) 00:13:59.436 10183.286 - 10233.698: 95.9173% ( 31) 00:13:59.436 10233.698 - 10284.111: 96.0405% ( 22) 00:13:59.436 10284.111 - 10334.523: 96.1582% ( 21) 00:13:59.436 10334.523 - 10384.935: 96.3094% ( 27) 00:13:59.436 10384.935 - 10435.348: 96.4270% ( 21) 00:13:59.436 10435.348 - 10485.760: 96.5614% ( 24) 00:13:59.436 10485.760 - 10536.172: 96.6790% ( 21) 00:13:59.436 10536.172 - 10586.585: 96.7910% ( 20) 00:13:59.436 10586.585 - 10636.997: 96.8862% ( 17) 00:13:59.436 10636.997 - 10687.409: 97.0542% ( 30) 00:13:59.436 10687.409 - 10737.822: 
97.1326% ( 14) 00:13:59.436 10737.822 - 10788.234: 97.2782% ( 26) 00:13:59.436 10788.234 - 10838.646: 97.3958% ( 21) 00:13:59.436 10838.646 - 10889.058: 97.5302% ( 24) 00:13:59.436 10889.058 - 10939.471: 97.6591% ( 23) 00:13:59.436 10939.471 - 10989.883: 97.7655% ( 19) 00:13:59.436 10989.883 - 11040.295: 97.9055% ( 25) 00:13:59.436 11040.295 - 11090.708: 98.0231% ( 21) 00:13:59.436 11090.708 - 11141.120: 98.1295% ( 19) 00:13:59.436 11141.120 - 11191.532: 98.2079% ( 14) 00:13:59.436 11191.532 - 11241.945: 98.3031% ( 17) 00:13:59.436 11241.945 - 11292.357: 98.3759% ( 13) 00:13:59.436 11292.357 - 11342.769: 98.4655% ( 16) 00:13:59.436 11342.769 - 11393.182: 98.5383% ( 13) 00:13:59.436 11393.182 - 11443.594: 98.6055% ( 12) 00:13:59.436 11443.594 - 11494.006: 98.6895% ( 15) 00:13:59.436 11494.006 - 11544.418: 98.7679% ( 14) 00:13:59.436 11544.418 - 11594.831: 98.8463% ( 14) 00:13:59.436 11594.831 - 11645.243: 98.9247% ( 14) 00:13:59.436 11645.243 - 11695.655: 98.9975% ( 13) 00:13:59.436 11695.655 - 11746.068: 99.0479% ( 9) 00:13:59.436 11746.068 - 11796.480: 99.0759% ( 5) 00:13:59.436 11796.480 - 11846.892: 99.1207% ( 8) 00:13:59.436 11846.892 - 11897.305: 99.1487% ( 5) 00:13:59.436 11897.305 - 11947.717: 99.1599% ( 2) 00:13:59.436 11947.717 - 11998.129: 99.1711% ( 2) 00:13:59.436 11998.129 - 12048.542: 99.1767% ( 1) 00:13:59.436 12048.542 - 12098.954: 99.1879% ( 2) 00:13:59.436 12098.954 - 12149.366: 99.1991% ( 2) 00:13:59.436 12199.778 - 12250.191: 99.2159% ( 3) 00:13:59.436 12250.191 - 12300.603: 99.2216% ( 1) 00:13:59.436 12300.603 - 12351.015: 99.2272% ( 1) 00:13:59.436 12351.015 - 12401.428: 99.2384% ( 2) 00:13:59.436 12401.428 - 12451.840: 99.2440% ( 1) 00:13:59.436 12451.840 - 12502.252: 99.2496% ( 1) 00:13:59.436 12502.252 - 12552.665: 99.2608% ( 2) 00:13:59.436 12603.077 - 12653.489: 99.2776% ( 3) 00:13:59.436 12653.489 - 12703.902: 99.2832% ( 1) 00:13:59.436 18148.431 - 18249.255: 99.2888% ( 1) 00:13:59.436 18249.255 - 18350.080: 99.2944% ( 1) 00:13:59.436 18350.080 - 18450.905: 99.3168% ( 4) 00:13:59.436 18450.905 - 18551.729: 99.3336% ( 3) 00:13:59.436 18551.729 - 18652.554: 99.3504% ( 3) 00:13:59.436 18652.554 - 18753.378: 99.3728% ( 4) 00:13:59.436 18753.378 - 18854.203: 99.3896% ( 3) 00:13:59.436 18854.203 - 18955.028: 99.4120% ( 4) 00:13:59.436 18955.028 - 19055.852: 99.4400% ( 5) 00:13:59.436 19055.852 - 19156.677: 99.4624% ( 4) 00:13:59.436 19156.677 - 19257.502: 99.4848% ( 4) 00:13:59.436 19257.502 - 19358.326: 99.5128% ( 5) 00:13:59.436 19358.326 - 19459.151: 99.5464% ( 6) 00:13:59.436 19459.151 - 19559.975: 99.5688% ( 4) 00:13:59.436 19559.975 - 19660.800: 99.5968% ( 5) 00:13:59.436 19660.800 - 19761.625: 99.6304% ( 6) 00:13:59.436 19761.625 - 19862.449: 99.6360% ( 1) 00:13:59.436 19963.274 - 20064.098: 99.6416% ( 1) 00:13:59.436 23391.311 - 23492.135: 99.6584% ( 3) 00:13:59.436 23492.135 - 23592.960: 99.6920% ( 6) 00:13:59.436 23592.960 - 23693.785: 99.7200% ( 5) 00:13:59.436 23693.785 - 23794.609: 99.7424% ( 4) 00:13:59.436 23794.609 - 23895.434: 99.7704% ( 5) 00:13:59.436 23895.434 - 23996.258: 99.7928% ( 4) 00:13:59.436 23996.258 - 24097.083: 99.8208% ( 5) 00:13:59.436 24097.083 - 24197.908: 99.8488% ( 5) 00:13:59.436 24197.908 - 24298.732: 99.8768% ( 5) 00:13:59.436 24298.732 - 24399.557: 99.9048% ( 5) 00:13:59.436 24399.557 - 24500.382: 99.9272% ( 4) 00:13:59.436 24500.382 - 24601.206: 99.9608% ( 6) 00:13:59.436 24601.206 - 24702.031: 99.9832% ( 4) 00:13:59.436 24702.031 - 24802.855: 100.0000% ( 3) 00:13:59.436 00:13:59.436 Latency histogram for PCIE (0000:00:11.0) 
NSID 1 from core 0: 00:13:59.436 ============================================================================== 00:13:59.436 Range in us Cumulative IO count 00:13:59.436 6049.477 - 6074.683: 0.0112% ( 2) 00:13:59.436 6074.683 - 6099.889: 0.0784% ( 12) 00:13:59.436 6099.889 - 6125.095: 0.1512% ( 13) 00:13:59.436 6125.095 - 6150.302: 0.3360% ( 33) 00:13:59.436 6150.302 - 6175.508: 0.7336% ( 71) 00:13:59.436 6175.508 - 6200.714: 1.2545% ( 93) 00:13:59.436 6200.714 - 6225.920: 2.0441% ( 141) 00:13:59.436 6225.920 - 6251.126: 3.1250% ( 193) 00:13:59.436 6251.126 - 6276.332: 4.6595% ( 274) 00:13:59.436 6276.332 - 6301.538: 6.3844% ( 308) 00:13:59.436 6301.538 - 6326.745: 8.0645% ( 300) 00:13:59.436 6326.745 - 6351.951: 10.0190% ( 349) 00:13:59.436 6351.951 - 6377.157: 11.8448% ( 326) 00:13:59.436 6377.157 - 6402.363: 13.7601% ( 342) 00:13:59.436 6402.363 - 6427.569: 15.5018% ( 311) 00:13:59.436 6427.569 - 6452.775: 17.3891% ( 337) 00:13:59.436 6452.775 - 6503.188: 21.2982% ( 698) 00:13:59.436 6503.188 - 6553.600: 25.2576% ( 707) 00:13:59.436 6553.600 - 6604.012: 29.4915% ( 756) 00:13:59.436 6604.012 - 6654.425: 33.9158% ( 790) 00:13:59.436 6654.425 - 6704.837: 38.3625% ( 794) 00:13:59.436 6704.837 - 6755.249: 42.8035% ( 793) 00:13:59.436 6755.249 - 6805.662: 47.4518% ( 830) 00:13:59.436 6805.662 - 6856.074: 51.9433% ( 802) 00:13:59.436 6856.074 - 6906.486: 56.5804% ( 828) 00:13:59.436 6906.486 - 6956.898: 61.0999% ( 807) 00:13:59.436 6956.898 - 7007.311: 65.6362% ( 810) 00:13:59.436 7007.311 - 7057.723: 70.0381% ( 786) 00:13:59.436 7057.723 - 7108.135: 74.1823% ( 740) 00:13:59.436 7108.135 - 7158.548: 77.5594% ( 603) 00:13:59.436 7158.548 - 7208.960: 80.0459% ( 444) 00:13:59.436 7208.960 - 7259.372: 81.8380% ( 320) 00:13:59.436 7259.372 - 7309.785: 83.1765% ( 239) 00:13:59.436 7309.785 - 7360.197: 84.2910% ( 199) 00:13:59.436 7360.197 - 7410.609: 85.3271% ( 185) 00:13:59.436 7410.609 - 7461.022: 86.2399% ( 163) 00:13:59.436 7461.022 - 7511.434: 86.9568% ( 128) 00:13:59.436 7511.434 - 7561.846: 87.5056% ( 98) 00:13:59.436 7561.846 - 7612.258: 87.8976% ( 70) 00:13:59.436 7612.258 - 7662.671: 88.2504% ( 63) 00:13:59.436 7662.671 - 7713.083: 88.5921% ( 61) 00:13:59.436 7713.083 - 7763.495: 88.8721% ( 50) 00:13:59.436 7763.495 - 7813.908: 89.0849% ( 38) 00:13:59.436 7813.908 - 7864.320: 89.2361% ( 27) 00:13:59.436 7864.320 - 7914.732: 89.3873% ( 27) 00:13:59.436 7914.732 - 7965.145: 89.5161% ( 23) 00:13:59.436 7965.145 - 8015.557: 89.6561% ( 25) 00:13:59.436 8015.557 - 8065.969: 89.7849% ( 23) 00:13:59.436 8065.969 - 8116.382: 89.9026% ( 21) 00:13:59.436 8116.382 - 8166.794: 90.0258% ( 22) 00:13:59.436 8166.794 - 8217.206: 90.1602% ( 24) 00:13:59.436 8217.206 - 8267.618: 90.3506% ( 34) 00:13:59.436 8267.618 - 8318.031: 90.5186% ( 30) 00:13:59.436 8318.031 - 8368.443: 90.6922% ( 31) 00:13:59.436 8368.443 - 8418.855: 90.8994% ( 37) 00:13:59.436 8418.855 - 8469.268: 91.1738% ( 49) 00:13:59.436 8469.268 - 8519.680: 91.3698% ( 35) 00:13:59.436 8519.680 - 8570.092: 91.5603% ( 34) 00:13:59.436 8570.092 - 8620.505: 91.7395% ( 32) 00:13:59.436 8620.505 - 8670.917: 91.9243% ( 33) 00:13:59.436 8670.917 - 8721.329: 92.0923% ( 30) 00:13:59.436 8721.329 - 8771.742: 92.2603% ( 30) 00:13:59.436 8771.742 - 8822.154: 92.4395% ( 32) 00:13:59.436 8822.154 - 8872.566: 92.6299% ( 34) 00:13:59.436 8872.566 - 8922.978: 92.7643% ( 24) 00:13:59.436 8922.978 - 8973.391: 92.8987% ( 24) 00:13:59.436 8973.391 - 9023.803: 92.9940% ( 17) 00:13:59.436 9023.803 - 9074.215: 93.1620% ( 30) 00:13:59.436 9074.215 - 9124.628: 93.3468% ( 
33) 00:13:59.436 9124.628 - 9175.040: 93.4644% ( 21) 00:13:59.436 9175.040 - 9225.452: 93.5484% ( 15) 00:13:59.436 9225.452 - 9275.865: 93.6492% ( 18) 00:13:59.436 9275.865 - 9326.277: 93.7724% ( 22) 00:13:59.436 9326.277 - 9376.689: 93.8508% ( 14) 00:13:59.436 9376.689 - 9427.102: 93.9180% ( 12) 00:13:59.436 9427.102 - 9477.514: 94.0076% ( 16) 00:13:59.436 9477.514 - 9527.926: 94.1084% ( 18) 00:13:59.436 9527.926 - 9578.338: 94.2036% ( 17) 00:13:59.436 9578.338 - 9628.751: 94.2876% ( 15) 00:13:59.436 9628.751 - 9679.163: 94.4164% ( 23) 00:13:59.436 9679.163 - 9729.575: 94.5397% ( 22) 00:13:59.436 9729.575 - 9779.988: 94.6405% ( 18) 00:13:59.436 9779.988 - 9830.400: 94.8029% ( 29) 00:13:59.436 9830.400 - 9880.812: 94.9541% ( 27) 00:13:59.436 9880.812 - 9931.225: 95.1165% ( 29) 00:13:59.436 9931.225 - 9981.637: 95.2285% ( 20) 00:13:59.436 9981.637 - 10032.049: 95.3573% ( 23) 00:13:59.436 10032.049 - 10082.462: 95.4917% ( 24) 00:13:59.436 10082.462 - 10132.874: 95.6149% ( 22) 00:13:59.436 10132.874 - 10183.286: 95.7773% ( 29) 00:13:59.436 10183.286 - 10233.698: 95.9173% ( 25) 00:13:59.437 10233.698 - 10284.111: 96.0685% ( 27) 00:13:59.437 10284.111 - 10334.523: 96.2366% ( 30) 00:13:59.437 10334.523 - 10384.935: 96.3766% ( 25) 00:13:59.437 10384.935 - 10435.348: 96.5446% ( 30) 00:13:59.437 10435.348 - 10485.760: 96.7126% ( 30) 00:13:59.437 10485.760 - 10536.172: 96.8694% ( 28) 00:13:59.437 10536.172 - 10586.585: 97.0150% ( 26) 00:13:59.437 10586.585 - 10636.997: 97.1214% ( 19) 00:13:59.437 10636.997 - 10687.409: 97.2166% ( 17) 00:13:59.437 10687.409 - 10737.822: 97.3006% ( 15) 00:13:59.437 10737.822 - 10788.234: 97.3790% ( 14) 00:13:59.437 10788.234 - 10838.646: 97.4854% ( 19) 00:13:59.437 10838.646 - 10889.058: 97.5694% ( 15) 00:13:59.437 10889.058 - 10939.471: 97.6647% ( 17) 00:13:59.437 10939.471 - 10989.883: 97.7767% ( 20) 00:13:59.437 10989.883 - 11040.295: 97.8719% ( 17) 00:13:59.437 11040.295 - 11090.708: 97.9503% ( 14) 00:13:59.437 11090.708 - 11141.120: 98.0455% ( 17) 00:13:59.437 11141.120 - 11191.532: 98.1351% ( 16) 00:13:59.437 11191.532 - 11241.945: 98.2135% ( 14) 00:13:59.437 11241.945 - 11292.357: 98.3031% ( 16) 00:13:59.437 11292.357 - 11342.769: 98.3815% ( 14) 00:13:59.437 11342.769 - 11393.182: 98.4655% ( 15) 00:13:59.437 11393.182 - 11443.594: 98.5439% ( 14) 00:13:59.437 11443.594 - 11494.006: 98.6167% ( 13) 00:13:59.437 11494.006 - 11544.418: 98.7007% ( 15) 00:13:59.437 11544.418 - 11594.831: 98.7847% ( 15) 00:13:59.437 11594.831 - 11645.243: 98.8575% ( 13) 00:13:59.437 11645.243 - 11695.655: 98.9415% ( 15) 00:13:59.437 11695.655 - 11746.068: 98.9975% ( 10) 00:13:59.437 11746.068 - 11796.480: 99.0535% ( 10) 00:13:59.437 11796.480 - 11846.892: 99.1151% ( 11) 00:13:59.437 11846.892 - 11897.305: 99.1767% ( 11) 00:13:59.437 11897.305 - 11947.717: 99.2272% ( 9) 00:13:59.437 11947.717 - 11998.129: 99.2552% ( 5) 00:13:59.437 11998.129 - 12048.542: 99.2776% ( 4) 00:13:59.437 12048.542 - 12098.954: 99.2832% ( 1) 00:13:59.437 17946.782 - 18047.606: 99.3056% ( 4) 00:13:59.437 18047.606 - 18148.431: 99.3336% ( 5) 00:13:59.437 18148.431 - 18249.255: 99.3616% ( 5) 00:13:59.437 18249.255 - 18350.080: 99.3952% ( 6) 00:13:59.437 18350.080 - 18450.905: 99.4232% ( 5) 00:13:59.437 18450.905 - 18551.729: 99.4512% ( 5) 00:13:59.437 18551.729 - 18652.554: 99.4792% ( 5) 00:13:59.437 18652.554 - 18753.378: 99.5072% ( 5) 00:13:59.437 18753.378 - 18854.203: 99.5408% ( 6) 00:13:59.437 18854.203 - 18955.028: 99.5688% ( 5) 00:13:59.437 18955.028 - 19055.852: 99.6024% ( 6) 00:13:59.437 19055.852 - 
19156.677: 99.6360% ( 6) 00:13:59.437 19156.677 - 19257.502: 99.6416% ( 1) 00:13:59.437 22685.538 - 22786.363: 99.6752% ( 6) 00:13:59.437 22786.363 - 22887.188: 99.7032% ( 5) 00:13:59.437 22887.188 - 22988.012: 99.7312% ( 5) 00:13:59.437 22988.012 - 23088.837: 99.7648% ( 6) 00:13:59.437 23088.837 - 23189.662: 99.7928% ( 5) 00:13:59.437 23189.662 - 23290.486: 99.8208% ( 5) 00:13:59.437 23290.486 - 23391.311: 99.8544% ( 6) 00:13:59.437 23391.311 - 23492.135: 99.8824% ( 5) 00:13:59.437 23492.135 - 23592.960: 99.9104% ( 5) 00:13:59.437 23592.960 - 23693.785: 99.9440% ( 6) 00:13:59.437 23693.785 - 23794.609: 99.9776% ( 6) 00:13:59.437 23794.609 - 23895.434: 100.0000% ( 4) 00:13:59.437 00:13:59.437 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:13:59.437 ============================================================================== 00:13:59.437 Range in us Cumulative IO count 00:13:59.437 5242.880 - 5268.086: 0.0392% ( 7) 00:13:59.437 5268.086 - 5293.292: 0.0672% ( 5) 00:13:59.437 5293.292 - 5318.498: 0.0728% ( 1) 00:13:59.437 5318.498 - 5343.705: 0.0840% ( 2) 00:13:59.437 5343.705 - 5368.911: 0.0952% ( 2) 00:13:59.437 5368.911 - 5394.117: 0.1008% ( 1) 00:13:59.437 5394.117 - 5419.323: 0.1064% ( 1) 00:13:59.437 5419.323 - 5444.529: 0.1232% ( 3) 00:13:59.437 5444.529 - 5469.735: 0.1400% ( 3) 00:13:59.437 5469.735 - 5494.942: 0.1456% ( 1) 00:13:59.437 5494.942 - 5520.148: 0.1624% ( 3) 00:13:59.437 5520.148 - 5545.354: 0.1736% ( 2) 00:13:59.437 5545.354 - 5570.560: 0.1792% ( 1) 00:13:59.437 5570.560 - 5595.766: 0.1960% ( 3) 00:13:59.437 5595.766 - 5620.972: 0.2072% ( 2) 00:13:59.437 5620.972 - 5646.178: 0.2184% ( 2) 00:13:59.437 5646.178 - 5671.385: 0.2240% ( 1) 00:13:59.437 5671.385 - 5696.591: 0.2296% ( 1) 00:13:59.437 5696.591 - 5721.797: 0.2352% ( 1) 00:13:59.437 5721.797 - 5747.003: 0.2408% ( 1) 00:13:59.437 5747.003 - 5772.209: 0.2520% ( 2) 00:13:59.437 5772.209 - 5797.415: 0.2576% ( 1) 00:13:59.437 5797.415 - 5822.622: 0.2632% ( 1) 00:13:59.437 5822.622 - 5847.828: 0.2688% ( 1) 00:13:59.437 5847.828 - 5873.034: 0.2744% ( 1) 00:13:59.437 5873.034 - 5898.240: 0.2800% ( 1) 00:13:59.437 5898.240 - 5923.446: 0.2856% ( 1) 00:13:59.437 5923.446 - 5948.652: 0.2912% ( 1) 00:13:59.437 5948.652 - 5973.858: 0.2968% ( 1) 00:13:59.437 5973.858 - 5999.065: 0.3024% ( 1) 00:13:59.437 5999.065 - 6024.271: 0.3080% ( 1) 00:13:59.437 6024.271 - 6049.477: 0.3136% ( 1) 00:13:59.437 6049.477 - 6074.683: 0.3360% ( 4) 00:13:59.437 6074.683 - 6099.889: 0.3584% ( 4) 00:13:59.437 6099.889 - 6125.095: 0.4592% ( 18) 00:13:59.437 6125.095 - 6150.302: 0.6384% ( 32) 00:13:59.437 6150.302 - 6175.508: 0.9409% ( 54) 00:13:59.437 6175.508 - 6200.714: 1.5065% ( 101) 00:13:59.437 6200.714 - 6225.920: 2.5034% ( 178) 00:13:59.437 6225.920 - 6251.126: 3.6234% ( 200) 00:13:59.437 6251.126 - 6276.332: 5.1355% ( 270) 00:13:59.437 6276.332 - 6301.538: 6.6140% ( 264) 00:13:59.437 6301.538 - 6326.745: 8.3669% ( 313) 00:13:59.437 6326.745 - 6351.951: 10.1871% ( 325) 00:13:59.437 6351.951 - 6377.157: 12.1360% ( 348) 00:13:59.437 6377.157 - 6402.363: 14.1969% ( 368) 00:13:59.437 6402.363 - 6427.569: 16.3362% ( 382) 00:13:59.437 6427.569 - 6452.775: 18.2572% ( 343) 00:13:59.437 6452.775 - 6503.188: 22.3006% ( 722) 00:13:59.437 6503.188 - 6553.600: 26.0809% ( 675) 00:13:59.437 6553.600 - 6604.012: 30.2363% ( 742) 00:13:59.437 6604.012 - 6654.425: 34.4590% ( 754) 00:13:59.437 6654.425 - 6704.837: 38.8273% ( 780) 00:13:59.437 6704.837 - 6755.249: 43.3132% ( 801) 00:13:59.437 6755.249 - 6805.662: 47.7879% ( 799) 00:13:59.437 
6805.662 - 6856.074: 52.0721% ( 765) 00:13:59.437 6856.074 - 6906.486: 56.3620% ( 766) 00:13:59.437 6906.486 - 6956.898: 60.7359% ( 781) 00:13:59.437 6956.898 - 7007.311: 65.0986% ( 779) 00:13:59.437 7007.311 - 7057.723: 69.3772% ( 764) 00:13:59.437 7057.723 - 7108.135: 73.2079% ( 684) 00:13:59.437 7108.135 - 7158.548: 76.3217% ( 556) 00:13:59.437 7158.548 - 7208.960: 78.6514% ( 416) 00:13:59.437 7208.960 - 7259.372: 80.4659% ( 324) 00:13:59.437 7259.372 - 7309.785: 81.9556% ( 266) 00:13:59.437 7309.785 - 7360.197: 83.2661% ( 234) 00:13:59.437 7360.197 - 7410.609: 84.3862% ( 200) 00:13:59.437 7410.609 - 7461.022: 85.3831% ( 178) 00:13:59.437 7461.022 - 7511.434: 86.3015% ( 164) 00:13:59.437 7511.434 - 7561.846: 86.9680% ( 119) 00:13:59.437 7561.846 - 7612.258: 87.5280% ( 100) 00:13:59.437 7612.258 - 7662.671: 88.0096% ( 86) 00:13:59.437 7662.671 - 7713.083: 88.4185% ( 73) 00:13:59.437 7713.083 - 7763.495: 88.7657% ( 62) 00:13:59.437 7763.495 - 7813.908: 89.0961% ( 59) 00:13:59.437 7813.908 - 7864.320: 89.4209% ( 58) 00:13:59.437 7864.320 - 7914.732: 89.6673% ( 44) 00:13:59.437 7914.732 - 7965.145: 89.8970% ( 41) 00:13:59.437 7965.145 - 8015.557: 90.1210% ( 40) 00:13:59.437 8015.557 - 8065.969: 90.3114% ( 34) 00:13:59.437 8065.969 - 8116.382: 90.4794% ( 30) 00:13:59.437 8116.382 - 8166.794: 90.6362% ( 28) 00:13:59.437 8166.794 - 8217.206: 90.8042% ( 30) 00:13:59.437 8217.206 - 8267.618: 90.9386% ( 24) 00:13:59.437 8267.618 - 8318.031: 91.0786% ( 25) 00:13:59.437 8318.031 - 8368.443: 91.1906% ( 20) 00:13:59.437 8368.443 - 8418.855: 91.3306% ( 25) 00:13:59.437 8418.855 - 8469.268: 91.4539% ( 22) 00:13:59.437 8469.268 - 8519.680: 91.5715% ( 21) 00:13:59.437 8519.680 - 8570.092: 91.7115% ( 25) 00:13:59.437 8570.092 - 8620.505: 91.8291% ( 21) 00:13:59.437 8620.505 - 8670.917: 91.9243% ( 17) 00:13:59.437 8670.917 - 8721.329: 92.0363% ( 20) 00:13:59.437 8721.329 - 8771.742: 92.1427% ( 19) 00:13:59.437 8771.742 - 8822.154: 92.2379% ( 17) 00:13:59.437 8822.154 - 8872.566: 92.3163% ( 14) 00:13:59.437 8872.566 - 8922.978: 92.4395% ( 22) 00:13:59.437 8922.978 - 8973.391: 92.5347% ( 17) 00:13:59.437 8973.391 - 9023.803: 92.6915% ( 28) 00:13:59.437 9023.803 - 9074.215: 92.8091% ( 21) 00:13:59.437 9074.215 - 9124.628: 92.9435% ( 24) 00:13:59.437 9124.628 - 9175.040: 93.0500% ( 19) 00:13:59.437 9175.040 - 9225.452: 93.1844% ( 24) 00:13:59.437 9225.452 - 9275.865: 93.3132% ( 23) 00:13:59.437 9275.865 - 9326.277: 93.4980% ( 33) 00:13:59.437 9326.277 - 9376.689: 93.6828% ( 33) 00:13:59.437 9376.689 - 9427.102: 93.8284% ( 26) 00:13:59.437 9427.102 - 9477.514: 94.0020% ( 31) 00:13:59.437 9477.514 - 9527.926: 94.1700% ( 30) 00:13:59.437 9527.926 - 9578.338: 94.3492% ( 32) 00:13:59.437 9578.338 - 9628.751: 94.5172% ( 30) 00:13:59.437 9628.751 - 9679.163: 94.6685% ( 27) 00:13:59.437 9679.163 - 9729.575: 94.8029% ( 24) 00:13:59.437 9729.575 - 9779.988: 94.9541% ( 27) 00:13:59.437 9779.988 - 9830.400: 95.0829% ( 23) 00:13:59.437 9830.400 - 9880.812: 95.2005% ( 21) 00:13:59.437 9880.812 - 9931.225: 95.3293% ( 23) 00:13:59.437 9931.225 - 9981.637: 95.4749% ( 26) 00:13:59.437 9981.637 - 10032.049: 95.6261% ( 27) 00:13:59.437 10032.049 - 10082.462: 95.8109% ( 33) 00:13:59.437 10082.462 - 10132.874: 96.0069% ( 35) 00:13:59.437 10132.874 - 10183.286: 96.1806% ( 31) 00:13:59.438 10183.286 - 10233.698: 96.3094% ( 23) 00:13:59.438 10233.698 - 10284.111: 96.4438% ( 24) 00:13:59.438 10284.111 - 10334.523: 96.5894% ( 26) 00:13:59.438 10334.523 - 10384.935: 96.7182% ( 23) 00:13:59.438 10384.935 - 10435.348: 96.8526% ( 24) 
00:13:59.438 [latency histogram buckets elided: ranges 10435.348us-12451.840us carry the cumulative count from 97.0262% to 99.2832%; tail buckets 17543.483us-23592.960us reach 100.0000%]
00:13:59.438 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:13:59.438 ==============================================================================
00:13:59.438        Range in us     Cumulative    IO count
00:13:59.439 [latency histogram buckets elided: ranges 4940.406us-12552.665us climb from 0.0112% to 99.2832%; tail buckets 16938.535us-22887.188us reach 100.0000%]
00:13:59.439 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:13:59.439 ==============================================================================
00:13:59.439        Range in us     Cumulative    IO count
00:13:59.440 [latency histogram buckets elided: ranges 4713.551us-12300.603us climb from 0.0112% to 99.2832%; tail buckets 16232.763us-22282.240us reach 100.0000%]
00:13:59.440 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:13:59.440 ==============================================================================
00:13:59.440        Range in us     Cumulative    IO count
00:13:59.441 [latency histogram buckets elided: ranges 4159.015us-12250.191us climb from 0.0112% to 99.2832%; tail buckets 15627.815us-21576.468us reach 100.0000%]
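The traced step below runs the perf binary as a 12 KiB sequential-write workload against the six namespaces listed in the initialization output that follows. As a reading aid, here is a comment-annotated restatement of that invocation; the flag glosses are assumptions based on spdk_nvme_perf usage text, not output captured in this job:

    # Sketch: annotated restatement of the spdk_nvme_perf call traced below.
    #   -q 128    queue depth: keep 128 I/Os outstanding per namespace
    #   -w write  sequential-write workload
    #   -o 12288  I/O size in bytes (12 KiB)
    #   -t 1      run time in seconds
    #   -LL       -L enables latency tracking; giving it twice also prints
    #             the per-bucket latency histograms seen in this log
    #   -i 0      shared-memory group ID for SPDK multi-process mode
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0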
00:13:59.441 12:44:58 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:00.379 Initializing NVMe Controllers
00:14:00.379 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:00.379 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:14:00.379 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:14:00.379 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:14:00.379 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:00.379 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:14:00.379 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:14:00.379 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:14:00.379 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:14:00.379 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:14:00.379 Initialization complete. Launching workers.
00:14:00.379 ========================================================
00:14:00.379                                                                            Latency(us)
00:14:00.379 Device Information                       :       IOPS      MiB/s    Average        min        max
00:14:00.379 PCIE (0000:00:10.0) NSID 1 from core 0   :   16697.04     195.67    7667.62    5353.05   26398.78
00:14:00.379 PCIE (0000:00:11.0) NSID 1 from core 0   :   16697.04     195.67    7660.21    5094.08   25568.79
00:14:00.379 PCIE (0000:00:13.0) NSID 1 from core 0   :   16697.04     195.67    7652.71    4559.76   25042.39
00:14:00.379 PCIE (0000:00:12.0) NSID 1 from core 0   :   16697.04     195.67    7645.36    4479.80   23578.27
00:14:00.379 PCIE (0000:00:12.0) NSID 2 from core 0   :   16697.04     195.67    7638.00    4188.83   22681.86
00:14:00.379 PCIE (0000:00:12.0) NSID 3 from core 0   :   16697.04     195.67    7630.61    3899.02   22250.88
00:14:00.379 ========================================================
00:14:00.379 Total                                    :  100182.22    1174.01    7649.08    3899.02   26398.78
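The IOPS and MiB/s columns above are mutually consistent with the 12288-byte I/O size used by this run (MiB/s = IOPS * io_size / 2^20). A one-line cross-check of that arithmetic, assuming only a POSIX awk:

    # Throughput cross-check: MiB/s = IOPS * 12288 bytes / 2^20
    awk 'BEGIN { printf "per-device: %.2f MiB/s, total: %.2f MiB/s\n",
                 16697.04 * 12288 / 1048576, 100182.22 * 12288 / 1048576 }'
    # prints: per-device: 195.67 MiB/s, total: 1174.01 MiB/s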
00:14:00.380 Summary latency data, consolidated from the six per-device percentile tables (all values in us):
00:14:00.380 Device                       1%        10%       25%       50%       75%       90%       95%        98%        99%      99.5%      99.9%   >=99.99%
00:14:00.380 (0000:00:10.0) NSID 1  6427.569  6856.074  7108.135  7309.785  7763.495  8620.505  9427.102  11393.182  13913.797  20568.222  26012.751  26416.049
00:14:00.380 (0000:00:11.0) NSID 1  6604.012  6956.898  7108.135  7309.785  7612.258  8519.680  9628.751  11241.945  13913.797  20164.923  25306.978  25609.452
00:14:00.380 (0000:00:13.0) NSID 1  6604.012  6956.898  7108.135  7309.785  7662.671  8469.268  9628.751  10889.058  14216.271  19761.625  24802.855  25105.329
00:14:00.380 (0000:00:12.0) NSID 1  6553.600  6956.898  7108.135  7309.785  7662.671  8469.268  9779.988  11342.769  14115.446  19660.800  23492.135  23592.960
00:14:00.380 (0000:00:12.0) NSID 2  6503.188  6956.898  7108.135  7309.785  7662.671  8469.268  9527.926  11241.945  14115.446  18955.028  22685.538  22685.538
00:14:00.380 (0000:00:12.0) NSID 3  6553.600  6956.898  7108.135  7309.785  7662.671  8469.268  9326.277  11141.120  13510.498  18350.080  22080.591  22282.240
00:14:00.380 (for every device the 99.99000%, 99.99900%, 99.99990% and 99.99999% entries are identical, shown once as >=99.99%)
00:14:00.380 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:00.380 ==============================================================================
00:14:00.380        Range in us     Cumulative    IO count
00:14:00.381 [latency histogram buckets elided: ranges 5343.705us-14216.271us climb from 0.0060% to 99.2337%; tail buckets 19761.625us-26416.049us reach 100.0000%]
00:14:00.381 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:14:00.381 ==============================================================================
00:14:00.381        Range in us     Cumulative    IO count
00:14:00.382 [latency histogram buckets elided: ranges 5091.643us-14216.271us climb from 0.0060% to 99.2337%; tail buckets 19156.677us-25609.452us reach 100.0000%]
00:14:00.382 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:14:00.382 ==============================================================================
00:14:00.382        Range in us     Cumulative    IO count
00:14:00.383 [latency histogram buckets elided: ranges 4537.108us-14518.745us climb from 0.0060% to 99.2337%; tail buckets 18753.378us-25105.329us reach 100.0000%]
00:14:00.383 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:14:00.383 ==============================================================================
00:14:00.383        Range in us     Cumulative    IO count
00:14:00.384 [latency histogram buckets elided: ranges 4461.489us-14317.095us climb from 0.0060% to 99.2337%; tail buckets 19055.852us-23492.135us reach 99.9521%; the log is truncated mid-entry at "23492.135 -"]
23592.960: 100.0000% ( 8) 00:14:00.384 00:14:00.384 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:14:00.384 ============================================================================== 00:14:00.384 Range in us Cumulative IO count 00:14:00.384 4184.222 - 4209.428: 0.0120% ( 2) 00:14:00.384 4209.428 - 4234.634: 0.0539% ( 7) 00:14:00.384 4234.634 - 4259.840: 0.1197% ( 11) 00:14:00.384 4259.840 - 4285.046: 0.1616% ( 7) 00:14:00.384 4285.046 - 4310.252: 0.2155% ( 9) 00:14:00.384 4310.252 - 4335.458: 0.2694% ( 9) 00:14:00.384 4335.458 - 4360.665: 0.2874% ( 3) 00:14:00.384 4360.665 - 4385.871: 0.2993% ( 2) 00:14:00.384 4385.871 - 4411.077: 0.3113% ( 2) 00:14:00.384 4411.077 - 4436.283: 0.3233% ( 2) 00:14:00.384 4436.283 - 4461.489: 0.3352% ( 2) 00:14:00.384 4461.489 - 4486.695: 0.3472% ( 2) 00:14:00.384 4486.695 - 4511.902: 0.3532% ( 1) 00:14:00.384 4511.902 - 4537.108: 0.3652% ( 2) 00:14:00.384 4537.108 - 4562.314: 0.3772% ( 2) 00:14:00.384 4562.314 - 4587.520: 0.3831% ( 1) 00:14:00.384 5873.034 - 5898.240: 0.3891% ( 1) 00:14:00.384 6024.271 - 6049.477: 0.3951% ( 1) 00:14:00.384 6049.477 - 6074.683: 0.4071% ( 2) 00:14:00.384 6074.683 - 6099.889: 0.4191% ( 2) 00:14:00.384 6099.889 - 6125.095: 0.4430% ( 4) 00:14:00.384 6125.095 - 6150.302: 0.5089% ( 11) 00:14:00.384 6150.302 - 6175.508: 0.5627% ( 9) 00:14:00.384 6175.508 - 6200.714: 0.6166% ( 9) 00:14:00.384 6200.714 - 6225.920: 0.6585% ( 7) 00:14:00.384 6225.920 - 6251.126: 0.6705% ( 2) 00:14:00.384 6251.126 - 6276.332: 0.6765% ( 1) 00:14:00.384 6276.332 - 6301.538: 0.7004% ( 4) 00:14:00.384 6301.538 - 6326.745: 0.7124% ( 2) 00:14:00.384 6326.745 - 6351.951: 0.7304% ( 3) 00:14:00.384 6351.951 - 6377.157: 0.7423% ( 2) 00:14:00.384 6377.157 - 6402.363: 0.7842% ( 7) 00:14:00.384 6402.363 - 6427.569: 0.8261% ( 7) 00:14:00.384 6427.569 - 6452.775: 0.8740% ( 8) 00:14:00.384 6452.775 - 6503.188: 1.0117% ( 23) 00:14:00.384 6503.188 - 6553.600: 1.2751% ( 44) 00:14:00.384 6553.600 - 6604.012: 1.6284% ( 59) 00:14:00.384 6604.012 - 6654.425: 2.2689% ( 107) 00:14:00.384 6654.425 - 6704.837: 3.0532% ( 131) 00:14:00.384 6704.837 - 6755.249: 3.9811% ( 155) 00:14:00.384 6755.249 - 6805.662: 5.6214% ( 274) 00:14:00.384 6805.662 - 6856.074: 7.1061% ( 248) 00:14:00.384 6856.074 - 6906.486: 9.2792% ( 363) 00:14:00.384 6906.486 - 6956.898: 12.0450% ( 462) 00:14:00.384 6956.898 - 7007.311: 15.5292% ( 582) 00:14:00.384 7007.311 - 7057.723: 20.0611% ( 757) 00:14:00.384 7057.723 - 7108.135: 25.8441% ( 966) 00:14:00.384 7108.135 - 7158.548: 31.9804% ( 1025) 00:14:00.384 7158.548 - 7208.960: 38.6255% ( 1110) 00:14:00.384 7208.960 - 7259.372: 45.6956% ( 1181) 00:14:00.384 7259.372 - 7309.785: 51.5146% ( 972) 00:14:00.384 7309.785 - 7360.197: 57.3515% ( 975) 00:14:00.384 7360.197 - 7410.609: 62.0570% ( 786) 00:14:00.384 7410.609 - 7461.022: 66.7445% ( 783) 00:14:00.384 7461.022 - 7511.434: 70.6178% ( 647) 00:14:00.384 7511.434 - 7561.846: 72.8688% ( 376) 00:14:00.384 7561.846 - 7612.258: 74.5869% ( 287) 00:14:00.384 7612.258 - 7662.671: 76.2273% ( 274) 00:14:00.384 7662.671 - 7713.083: 77.7598% ( 256) 00:14:00.384 7713.083 - 7763.495: 79.4899% ( 289) 00:14:00.384 7763.495 - 7813.908: 80.5675% ( 180) 00:14:00.384 7813.908 - 7864.320: 81.2919% ( 121) 00:14:00.384 7864.320 - 7914.732: 82.1121% ( 137) 00:14:00.384 7914.732 - 7965.145: 82.8065% ( 116) 00:14:00.384 7965.145 - 8015.557: 83.4411% ( 106) 00:14:00.384 8015.557 - 8065.969: 84.0398% ( 100) 00:14:00.384 8065.969 - 8116.382: 84.7043% ( 111) 00:14:00.384 8116.382 - 8166.794: 85.4227% ( 120) 00:14:00.384 
8166.794 - 8217.206: 86.0692% ( 108) 00:14:00.384 8217.206 - 8267.618: 86.8355% ( 128) 00:14:00.384 8267.618 - 8318.031: 87.4940% ( 110) 00:14:00.384 8318.031 - 8368.443: 88.9488% ( 243) 00:14:00.384 8368.443 - 8418.855: 89.5534% ( 101) 00:14:00.384 8418.855 - 8469.268: 90.1820% ( 105) 00:14:00.384 8469.268 - 8519.680: 90.7567% ( 96) 00:14:00.384 8519.680 - 8570.092: 91.1159% ( 60) 00:14:00.384 8570.092 - 8620.505: 91.5769% ( 77) 00:14:00.384 8620.505 - 8670.917: 91.9420% ( 61) 00:14:00.384 8670.917 - 8721.329: 92.3432% ( 67) 00:14:00.384 8721.329 - 8771.742: 92.5467% ( 34) 00:14:00.384 8771.742 - 8822.154: 92.7323% ( 31) 00:14:00.384 8822.154 - 8872.566: 92.8999% ( 28) 00:14:00.384 8872.566 - 8922.978: 93.2172% ( 53) 00:14:00.384 8922.978 - 8973.391: 93.4746% ( 43) 00:14:00.384 8973.391 - 9023.803: 93.6841% ( 35) 00:14:00.384 9023.803 - 9074.215: 93.9535% ( 45) 00:14:00.384 9074.215 - 9124.628: 94.0613% ( 18) 00:14:00.384 9124.628 - 9175.040: 94.2349% ( 29) 00:14:00.384 9175.040 - 9225.452: 94.2828% ( 8) 00:14:00.384 9225.452 - 9275.865: 94.4025% ( 20) 00:14:00.384 9275.865 - 9326.277: 94.5582% ( 26) 00:14:00.384 9326.277 - 9376.689: 94.7138% ( 26) 00:14:00.384 9376.689 - 9427.102: 94.8515% ( 23) 00:14:00.384 9427.102 - 9477.514: 94.9593% ( 18) 00:14:00.384 9477.514 - 9527.926: 95.0730% ( 19) 00:14:00.384 9527.926 - 9578.338: 95.1808% ( 18) 00:14:00.384 9578.338 - 9628.751: 95.2646% ( 14) 00:14:00.384 9628.751 - 9679.163: 95.3544% ( 15) 00:14:00.384 9679.163 - 9729.575: 95.4801% ( 21) 00:14:00.384 9729.575 - 9779.988: 95.6597% ( 30) 00:14:00.384 9779.988 - 9830.400: 95.7256% ( 11) 00:14:00.384 9830.400 - 9880.812: 95.7974% ( 12) 00:14:00.384 9880.812 - 9931.225: 95.8872% ( 15) 00:14:00.384 9931.225 - 9981.637: 95.9950% ( 18) 00:14:00.384 9981.637 - 10032.049: 96.1087% ( 19) 00:14:00.384 10032.049 - 10082.462: 96.2105% ( 17) 00:14:00.384 10082.462 - 10132.874: 96.3003% ( 15) 00:14:00.384 10132.874 - 10183.286: 96.3961% ( 16) 00:14:00.384 10183.286 - 10233.698: 96.4440% ( 8) 00:14:00.385 10233.698 - 10284.111: 96.4859% ( 7) 00:14:00.385 10284.111 - 10334.523: 96.5338% ( 8) 00:14:00.385 10334.523 - 10384.935: 96.5876% ( 9) 00:14:00.385 10384.935 - 10435.348: 96.6295% ( 7) 00:14:00.385 10435.348 - 10485.760: 96.7014% ( 12) 00:14:00.385 10485.760 - 10536.172: 96.8930% ( 32) 00:14:00.385 10536.172 - 10586.585: 96.9768% ( 14) 00:14:00.385 10586.585 - 10636.997: 97.1085% ( 22) 00:14:00.385 10636.997 - 10687.409: 97.2462% ( 23) 00:14:00.385 10687.409 - 10737.822: 97.4318% ( 31) 00:14:00.385 10737.822 - 10788.234: 97.5874% ( 26) 00:14:00.385 10788.234 - 10838.646: 97.6473% ( 10) 00:14:00.385 10838.646 - 10889.058: 97.7071% ( 10) 00:14:00.385 10889.058 - 10939.471: 97.8329% ( 21) 00:14:00.385 10939.471 - 10989.883: 97.8867% ( 9) 00:14:00.385 10989.883 - 11040.295: 97.9107% ( 4) 00:14:00.385 11040.295 - 11090.708: 97.9346% ( 4) 00:14:00.385 11090.708 - 11141.120: 97.9646% ( 5) 00:14:00.385 11141.120 - 11191.532: 97.9885% ( 4) 00:14:00.385 11191.532 - 11241.945: 98.0125% ( 4) 00:14:00.385 11241.945 - 11292.357: 98.0184% ( 1) 00:14:00.385 11292.357 - 11342.769: 98.0304% ( 2) 00:14:00.385 11342.769 - 11393.182: 98.0424% ( 2) 00:14:00.385 11393.182 - 11443.594: 98.0544% ( 2) 00:14:00.385 11443.594 - 11494.006: 98.0723% ( 3) 00:14:00.385 11494.006 - 11544.418: 98.0843% ( 2) 00:14:00.385 11544.418 - 11594.831: 98.1142% ( 5) 00:14:00.385 11594.831 - 11645.243: 98.1501% ( 6) 00:14:00.385 11645.243 - 11695.655: 98.1801% ( 5) 00:14:00.385 11695.655 - 11746.068: 98.2220% ( 7) 00:14:00.385 11746.068 - 11796.480: 
98.3118% ( 15) 00:14:00.385 11796.480 - 11846.892: 98.4375% ( 21) 00:14:00.385 11846.892 - 11897.305: 98.4734% ( 6) 00:14:00.385 11897.305 - 11947.717: 98.4974% ( 4) 00:14:00.385 11947.717 - 11998.129: 98.5153% ( 3) 00:14:00.385 11998.129 - 12048.542: 98.5512% ( 6) 00:14:00.385 12048.542 - 12098.954: 98.5632% ( 2) 00:14:00.385 12098.954 - 12149.366: 98.5752% ( 2) 00:14:00.385 12149.366 - 12199.778: 98.5812% ( 1) 00:14:00.385 12199.778 - 12250.191: 98.5991% ( 3) 00:14:00.385 12250.191 - 12300.603: 98.6051% ( 1) 00:14:00.385 12300.603 - 12351.015: 98.6171% ( 2) 00:14:00.385 12351.015 - 12401.428: 98.6530% ( 6) 00:14:00.385 12401.428 - 12451.840: 98.6830% ( 5) 00:14:00.385 12451.840 - 12502.252: 98.7249% ( 7) 00:14:00.385 12502.252 - 12552.665: 98.7488% ( 4) 00:14:00.385 12552.665 - 12603.077: 98.7907% ( 7) 00:14:00.385 12603.077 - 12653.489: 98.8206% ( 5) 00:14:00.385 12653.489 - 12703.902: 98.8266% ( 1) 00:14:00.385 12703.902 - 12754.314: 98.8386% ( 2) 00:14:00.385 12754.314 - 12804.726: 98.8506% ( 2) 00:14:00.385 13812.972 - 13913.797: 98.9104% ( 10) 00:14:00.385 13913.797 - 14014.622: 98.9643% ( 9) 00:14:00.385 14014.622 - 14115.446: 99.1739% ( 35) 00:14:00.385 14115.446 - 14216.271: 99.2038% ( 5) 00:14:00.385 14216.271 - 14317.095: 99.2337% ( 5) 00:14:00.385 18551.729 - 18652.554: 99.2577% ( 4) 00:14:00.385 18652.554 - 18753.378: 99.3056% ( 8) 00:14:00.385 18753.378 - 18854.203: 99.3654% ( 10) 00:14:00.385 18854.203 - 18955.028: 99.5031% ( 23) 00:14:00.385 18955.028 - 19055.852: 99.5570% ( 9) 00:14:00.385 19055.852 - 19156.677: 99.5869% ( 5) 00:14:00.385 19156.677 - 19257.502: 99.6169% ( 5) 00:14:00.385 22483.889 - 22584.714: 99.6408% ( 4) 00:14:00.385 22584.714 - 22685.538: 100.0000% ( 60) 00:14:00.385 00:14:00.385 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:14:00.385 ============================================================================== 00:14:00.385 Range in us Cumulative IO count 00:14:00.385 3881.748 - 3906.954: 0.0060% ( 1) 00:14:00.385 3932.160 - 3957.366: 0.0659% ( 10) 00:14:00.385 3957.366 - 3982.572: 0.1317% ( 11) 00:14:00.385 3982.572 - 4007.778: 0.1976% ( 11) 00:14:00.385 4007.778 - 4032.985: 0.2455% ( 8) 00:14:00.385 4032.985 - 4058.191: 0.2694% ( 4) 00:14:00.385 4058.191 - 4083.397: 0.2814% ( 2) 00:14:00.385 4083.397 - 4108.603: 0.2933% ( 2) 00:14:00.385 4108.603 - 4133.809: 0.3053% ( 2) 00:14:00.385 4133.809 - 4159.015: 0.3173% ( 2) 00:14:00.385 4159.015 - 4184.222: 0.3233% ( 1) 00:14:00.385 4184.222 - 4209.428: 0.3352% ( 2) 00:14:00.385 4209.428 - 4234.634: 0.3412% ( 1) 00:14:00.385 4234.634 - 4259.840: 0.3532% ( 2) 00:14:00.385 4259.840 - 4285.046: 0.3652% ( 2) 00:14:00.385 4285.046 - 4310.252: 0.3772% ( 2) 00:14:00.385 4310.252 - 4335.458: 0.3831% ( 1) 00:14:00.385 5822.622 - 5847.828: 0.4011% ( 3) 00:14:00.385 5847.828 - 5873.034: 0.4490% ( 8) 00:14:00.385 5873.034 - 5898.240: 0.4969% ( 8) 00:14:00.385 5898.240 - 5923.446: 0.5568% ( 10) 00:14:00.385 5923.446 - 5948.652: 0.6226% ( 11) 00:14:00.385 5948.652 - 5973.858: 0.6585% ( 6) 00:14:00.385 5973.858 - 5999.065: 0.6705% ( 2) 00:14:00.385 5999.065 - 6024.271: 0.6825% ( 2) 00:14:00.385 6024.271 - 6049.477: 0.6944% ( 2) 00:14:00.385 6049.477 - 6074.683: 0.7064% ( 2) 00:14:00.385 6074.683 - 6099.889: 0.7184% ( 2) 00:14:00.385 6099.889 - 6125.095: 0.7244% ( 1) 00:14:00.385 6125.095 - 6150.302: 0.7364% ( 2) 00:14:00.385 6150.302 - 6175.508: 0.7483% ( 2) 00:14:00.385 6175.508 - 6200.714: 0.7603% ( 2) 00:14:00.385 6200.714 - 6225.920: 0.7663% ( 1) 00:14:00.385 6351.951 - 6377.157: 0.7842% ( 3) 
00:14:00.385 6377.157 - 6402.363: 0.8142% ( 5) 00:14:00.385 6402.363 - 6427.569: 0.8441% ( 5) 00:14:00.385 6427.569 - 6452.775: 0.8800% ( 6) 00:14:00.385 6452.775 - 6503.188: 0.9519% ( 12) 00:14:00.385 6503.188 - 6553.600: 1.1434% ( 32) 00:14:00.385 6553.600 - 6604.012: 1.4907% ( 58) 00:14:00.385 6604.012 - 6654.425: 2.0534% ( 94) 00:14:00.385 6654.425 - 6704.837: 3.0532% ( 167) 00:14:00.385 6704.837 - 6755.249: 4.0589% ( 168) 00:14:00.385 6755.249 - 6805.662: 5.4298% ( 229) 00:14:00.385 6805.662 - 6856.074: 7.2917% ( 311) 00:14:00.385 6856.074 - 6906.486: 9.5426% ( 376) 00:14:00.385 6906.486 - 6956.898: 12.4880% ( 492) 00:14:00.385 6956.898 - 7007.311: 16.3374% ( 643) 00:14:00.385 7007.311 - 7057.723: 21.6236% ( 883) 00:14:00.385 7057.723 - 7108.135: 27.7059% ( 1016) 00:14:00.385 7108.135 - 7158.548: 33.4531% ( 960) 00:14:00.385 7158.548 - 7208.960: 39.4337% ( 999) 00:14:00.385 7208.960 - 7259.372: 45.7196% ( 1050) 00:14:00.385 7259.372 - 7309.785: 51.3829% ( 946) 00:14:00.385 7309.785 - 7360.197: 56.1841% ( 802) 00:14:00.385 7360.197 - 7410.609: 61.4224% ( 875) 00:14:00.385 7410.609 - 7461.022: 66.7265% ( 886) 00:14:00.385 7461.022 - 7511.434: 69.7917% ( 512) 00:14:00.385 7511.434 - 7561.846: 72.6293% ( 474) 00:14:00.385 7561.846 - 7612.258: 74.9042% ( 380) 00:14:00.385 7612.258 - 7662.671: 76.4667% ( 261) 00:14:00.385 7662.671 - 7713.083: 77.8077% ( 224) 00:14:00.385 7713.083 - 7763.495: 79.0829% ( 213) 00:14:00.385 7763.495 - 7813.908: 80.3879% ( 218) 00:14:00.385 7813.908 - 7864.320: 81.3218% ( 156) 00:14:00.385 7864.320 - 7914.732: 81.9744% ( 109) 00:14:00.385 7914.732 - 7965.145: 82.4892% ( 86) 00:14:00.385 7965.145 - 8015.557: 82.9861% ( 83) 00:14:00.385 8015.557 - 8065.969: 83.5129% ( 88) 00:14:00.385 8065.969 - 8116.382: 84.0038% ( 82) 00:14:00.385 8116.382 - 8166.794: 84.5606% ( 93) 00:14:00.385 8166.794 - 8217.206: 85.2969% ( 123) 00:14:00.385 8217.206 - 8267.618: 86.2308% ( 156) 00:14:00.385 8267.618 - 8318.031: 87.2486% ( 170) 00:14:00.385 8318.031 - 8368.443: 88.2124% ( 161) 00:14:00.385 8368.443 - 8418.855: 89.0984% ( 148) 00:14:00.385 8418.855 - 8469.268: 90.0383% ( 157) 00:14:00.385 8469.268 - 8519.680: 90.5472% ( 85) 00:14:00.385 8519.680 - 8570.092: 91.1339% ( 98) 00:14:00.385 8570.092 - 8620.505: 91.4452% ( 52) 00:14:00.385 8620.505 - 8670.917: 91.7146% ( 45) 00:14:00.385 8670.917 - 8721.329: 91.9241% ( 35) 00:14:00.385 8721.329 - 8771.742: 92.0797% ( 26) 00:14:00.385 8771.742 - 8822.154: 92.2893% ( 35) 00:14:00.385 8822.154 - 8872.566: 92.4569% ( 28) 00:14:00.385 8872.566 - 8922.978: 92.6066% ( 25) 00:14:00.385 8922.978 - 8973.391: 92.8341% ( 38) 00:14:00.385 8973.391 - 9023.803: 93.1394% ( 51) 00:14:00.385 9023.803 - 9074.215: 93.5704% ( 72) 00:14:00.385 9074.215 - 9124.628: 94.0793% ( 85) 00:14:00.385 9124.628 - 9175.040: 94.3606% ( 47) 00:14:00.385 9175.040 - 9225.452: 94.6360% ( 46) 00:14:00.385 9225.452 - 9275.865: 94.9114% ( 46) 00:14:00.385 9275.865 - 9326.277: 95.0850% ( 29) 00:14:00.385 9326.277 - 9376.689: 95.1509% ( 11) 00:14:00.385 9376.689 - 9427.102: 95.2287% ( 13) 00:14:00.385 9427.102 - 9477.514: 95.2826% ( 9) 00:14:00.385 9477.514 - 9527.926: 95.3784% ( 16) 00:14:00.385 9527.926 - 9578.338: 95.4921% ( 19) 00:14:00.385 9578.338 - 9628.751: 95.6178% ( 21) 00:14:00.385 9628.751 - 9679.163: 95.7615% ( 24) 00:14:00.385 9679.163 - 9729.575: 96.0249% ( 44) 00:14:00.385 9729.575 - 9779.988: 96.1925% ( 28) 00:14:00.385 9779.988 - 9830.400: 96.2584% ( 11) 00:14:00.385 9830.400 - 9880.812: 96.3302% ( 12) 00:14:00.385 9880.812 - 9931.225: 96.4080% ( 13) 
00:14:00.385 9931.225 - 9981.637: 96.5398% ( 22) 00:14:00.385 9981.637 - 10032.049: 96.6116% ( 12) 00:14:00.385 10032.049 - 10082.462: 96.6834% ( 12) 00:14:00.385 10082.462 - 10132.874: 96.7193% ( 6) 00:14:00.385 10132.874 - 10183.286: 96.7433% ( 4) 00:14:00.385 10183.286 - 10233.698: 96.7672% ( 4) 00:14:00.385 10233.698 - 10284.111: 96.7852% ( 3) 00:14:00.385 10284.111 - 10334.523: 96.8032% ( 3) 00:14:00.385 10334.523 - 10384.935: 96.8211% ( 3) 00:14:00.385 10384.935 - 10435.348: 96.8511% ( 5) 00:14:00.385 10435.348 - 10485.760: 96.8989% ( 8) 00:14:00.385 10485.760 - 10536.172: 96.9528% ( 9) 00:14:00.385 10536.172 - 10586.585: 97.0187% ( 11) 00:14:00.386 10586.585 - 10636.997: 97.0965% ( 13) 00:14:00.386 10636.997 - 10687.409: 97.2043% ( 18) 00:14:00.386 10687.409 - 10737.822: 97.3300% ( 21) 00:14:00.386 10737.822 - 10788.234: 97.4318% ( 17) 00:14:00.386 10788.234 - 10838.646: 97.5515% ( 20) 00:14:00.386 10838.646 - 10889.058: 97.6353% ( 14) 00:14:00.386 10889.058 - 10939.471: 97.7011% ( 11) 00:14:00.386 10939.471 - 10989.883: 97.7670% ( 11) 00:14:00.386 10989.883 - 11040.295: 97.8448% ( 13) 00:14:00.386 11040.295 - 11090.708: 97.9107% ( 11) 00:14:00.386 11090.708 - 11141.120: 98.0484% ( 23) 00:14:00.386 11141.120 - 11191.532: 98.0723% ( 4) 00:14:00.386 11191.532 - 11241.945: 98.0843% ( 2) 00:14:00.386 11544.418 - 11594.831: 98.1082% ( 4) 00:14:00.386 11594.831 - 11645.243: 98.1382% ( 5) 00:14:00.386 11645.243 - 11695.655: 98.1681% ( 5) 00:14:00.386 11695.655 - 11746.068: 98.1920% ( 4) 00:14:00.386 11746.068 - 11796.480: 98.2818% ( 15) 00:14:00.386 11796.480 - 11846.892: 98.4016% ( 20) 00:14:00.386 11846.892 - 11897.305: 98.4195% ( 3) 00:14:00.386 11897.305 - 11947.717: 98.4375% ( 3) 00:14:00.386 11947.717 - 11998.129: 98.4495% ( 2) 00:14:00.386 11998.129 - 12048.542: 98.4614% ( 2) 00:14:00.386 12048.542 - 12098.954: 98.4674% ( 1) 00:14:00.386 12250.191 - 12300.603: 98.4734% ( 1) 00:14:00.386 12351.015 - 12401.428: 98.4794% ( 1) 00:14:00.386 12502.252 - 12552.665: 98.4854% ( 1) 00:14:00.386 12603.077 - 12653.489: 98.5034% ( 3) 00:14:00.386 12653.489 - 12703.902: 98.5153% ( 2) 00:14:00.386 12703.902 - 12754.314: 98.5273% ( 2) 00:14:00.386 12754.314 - 12804.726: 98.5333% ( 1) 00:14:00.386 12804.726 - 12855.138: 98.5453% ( 2) 00:14:00.386 12855.138 - 12905.551: 98.5572% ( 2) 00:14:00.386 12905.551 - 13006.375: 98.5812% ( 4) 00:14:00.386 13006.375 - 13107.200: 98.5991% ( 3) 00:14:00.386 13107.200 - 13208.025: 98.6171% ( 3) 00:14:00.386 13208.025 - 13308.849: 98.7189% ( 17) 00:14:00.386 13308.849 - 13409.674: 98.8925% ( 29) 00:14:00.386 13409.674 - 13510.498: 99.0362% ( 24) 00:14:00.386 13510.498 - 13611.323: 99.1020% ( 11) 00:14:00.386 13611.323 - 13712.148: 99.1260% ( 4) 00:14:00.386 13712.148 - 13812.972: 99.1439% ( 3) 00:14:00.386 13812.972 - 13913.797: 99.1619% ( 3) 00:14:00.386 13913.797 - 14014.622: 99.1798% ( 3) 00:14:00.386 14014.622 - 14115.446: 99.1918% ( 2) 00:14:00.386 14115.446 - 14216.271: 99.2098% ( 3) 00:14:00.386 14216.271 - 14317.095: 99.2277% ( 3) 00:14:00.386 14317.095 - 14417.920: 99.2337% ( 1) 00:14:00.386 17745.132 - 17845.957: 99.2397% ( 1) 00:14:00.386 18047.606 - 18148.431: 99.2457% ( 1) 00:14:00.386 18148.431 - 18249.255: 99.2816% ( 6) 00:14:00.386 18249.255 - 18350.080: 99.6169% ( 56) 00:14:00.386 21878.942 - 21979.766: 99.6408% ( 4) 00:14:00.386 21979.766 - 22080.591: 99.9102% ( 45) 00:14:00.386 22080.591 - 22181.415: 99.9761% ( 11) 00:14:00.386 22181.415 - 22282.240: 100.0000% ( 4) 00:14:00.386 00:14:00.386 12:45:00 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b 
/dev/ram0 ']' 00:14:00.386 00:14:00.386 real 0m2.453s 00:14:00.386 user 0m2.186s 00:14:00.386 sys 0m0.175s 00:14:00.386 12:45:00 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.386 12:45:00 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:14:00.386 ************************************ 00:14:00.386 END TEST nvme_perf 00:14:00.386 ************************************ 00:14:00.386 12:45:00 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:14:00.386 12:45:00 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:00.386 12:45:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.386 12:45:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.386 ************************************ 00:14:00.386 START TEST nvme_hello_world 00:14:00.386 ************************************ 00:14:00.386 12:45:00 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:14:00.644 Initializing NVMe Controllers 00:14:00.644 Attached to 0000:00:10.0 00:14:00.644 Namespace ID: 1 size: 6GB 00:14:00.644 Attached to 0000:00:11.0 00:14:00.644 Namespace ID: 1 size: 5GB 00:14:00.644 Attached to 0000:00:13.0 00:14:00.644 Namespace ID: 1 size: 1GB 00:14:00.644 Attached to 0000:00:12.0 00:14:00.644 Namespace ID: 1 size: 4GB 00:14:00.644 Namespace ID: 2 size: 4GB 00:14:00.644 Namespace ID: 3 size: 4GB 00:14:00.644 Initialization complete. 00:14:00.644 INFO: using host memory buffer for IO 00:14:00.644 Hello world! 00:14:00.644 INFO: using host memory buffer for IO 00:14:00.644 Hello world! 00:14:00.644 INFO: using host memory buffer for IO 00:14:00.644 Hello world! 00:14:00.644 INFO: using host memory buffer for IO 00:14:00.644 Hello world! 00:14:00.644 INFO: using host memory buffer for IO 00:14:00.644 Hello world! 00:14:00.644 INFO: using host memory buffer for IO 00:14:00.644 Hello world! 
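The START TEST / END TEST banners and the real/user/sys triplets that bracket each test in this log come from the run_test wrapper in autotest_common.sh. A minimal sketch of that pattern follows; it is a hypothetical simplification consistent with the visible output, not SPDK's actual run_test implementation:

  # run_test NAME CMD...: banner, time the command, banner again
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"               # produces the real/user/sys lines seen above
    local rc=$?              # capture the timed command's exit status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }

The real script adds xtrace toggling and argument checks (the '[' N -le 1 ']' guards visible in the trace), but the banner-time-banner shape is the same.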
00:14:00.644 00:14:00.644 real 0m0.224s 00:14:00.644 user 0m0.074s 00:14:00.644 sys 0m0.112s 00:14:00.645 12:45:00 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.645 12:45:00 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:00.645 ************************************ 00:14:00.645 END TEST nvme_hello_world 00:14:00.645 ************************************ 00:14:00.645 12:45:00 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:00.645 12:45:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.645 12:45:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.645 12:45:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.645 ************************************ 00:14:00.645 START TEST nvme_sgl 00:14:00.645 ************************************ 00:14:00.645 12:45:00 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:14:00.902 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:14:00.902 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:14:00.902 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:14:00.902 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:14:00.902 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:14:00.902 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:14:00.902 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:14:00.902 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:14:00.902 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:14:00.902 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:14:00.902 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:14:00.902 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:14:00.902 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:14:00.902 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:14:00.903 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:14:00.903 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:14:00.903 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:14:00.903 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:14:00.903 NVMe Readv/Writev Request test 00:14:00.903 Attached to 0000:00:10.0 00:14:00.903 Attached to 0000:00:11.0 00:14:00.903 Attached to 0000:00:13.0 00:14:00.903 Attached to 0000:00:12.0 00:14:00.903 0000:00:10.0: build_io_request_2 test passed 00:14:00.903 0000:00:10.0: build_io_request_4 test passed 00:14:00.903 0000:00:10.0: build_io_request_5 test passed 00:14:00.903 0000:00:10.0: build_io_request_6 test passed 00:14:00.903 0000:00:10.0: build_io_request_7 test passed 00:14:00.903 0000:00:10.0: build_io_request_10 test passed 00:14:00.903 0000:00:11.0: build_io_request_2 test passed 00:14:00.903 0000:00:11.0: build_io_request_4 test passed 00:14:00.903 0000:00:11.0: build_io_request_5 test passed 00:14:00.903 0000:00:11.0: build_io_request_6 test passed 00:14:00.903 0000:00:11.0: build_io_request_7 test passed 00:14:00.903 0000:00:11.0: build_io_request_10 test passed 00:14:00.903 Cleaning up... 00:14:00.903 00:14:00.903 real 0m0.268s 00:14:00.903 user 0m0.125s 00:14:00.903 sys 0m0.097s 00:14:00.903 12:45:00 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.903 12:45:00 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:14:00.903 ************************************ 00:14:00.903 END TEST nvme_sgl 00:14:00.903 ************************************ 00:14:00.903 12:45:00 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:00.903 12:45:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.903 12:45:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.903 12:45:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.903 ************************************ 00:14:00.903 START TEST nvme_e2edp 00:14:00.903 ************************************ 00:14:00.903 12:45:00 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:14:01.160 NVMe Write/Read with End-to-End data protection test 00:14:01.160 Attached to 0000:00:10.0 00:14:01.160 Attached to 0000:00:11.0 00:14:01.160 Attached to 0000:00:13.0 00:14:01.160 Attached to 0000:00:12.0 00:14:01.160 Cleaning up... 
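Each controller in the sgl output above receives twelve build_io_request_N attempts; some are rejected up front with "Invalid IO length parameter" and the rest are reported as "test passed". When auditing a saved copy of this output offline, a per-controller tally can be produced with a one-liner. This is a hypothetical helper: it assumes the excerpt was saved as nvme_sgl.log with the timestamp as the first whitespace-separated field, so the PCI address is field 2:

  awk '/Invalid IO length parameter/ { bad[$2]++ }
       / test passed$/               { ok[$2]++ }
       END { for (d in bad) printf "%s passed=%d rejected=%d\n", d, ok[d], bad[d] }' nvme_sgl.log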
00:14:01.160 00:14:01.160 real 0m0.204s 00:14:01.160 user 0m0.070s 00:14:01.160 sys 0m0.089s 00:14:01.160 12:45:00 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.160 ************************************ 00:14:01.160 END TEST nvme_e2edp 00:14:01.160 12:45:00 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:14:01.160 ************************************ 00:14:01.160 12:45:00 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:01.160 12:45:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.160 12:45:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.160 12:45:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.160 ************************************ 00:14:01.160 START TEST nvme_reserve 00:14:01.160 ************************************ 00:14:01.160 12:45:00 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:14:01.418 ===================================================== 00:14:01.418 NVMe Controller at PCI bus 0, device 16, function 0 00:14:01.418 ===================================================== 00:14:01.418 Reservations: Not Supported 00:14:01.418 ===================================================== 00:14:01.418 NVMe Controller at PCI bus 0, device 17, function 0 00:14:01.418 ===================================================== 00:14:01.418 Reservations: Not Supported 00:14:01.418 ===================================================== 00:14:01.418 NVMe Controller at PCI bus 0, device 19, function 0 00:14:01.418 ===================================================== 00:14:01.418 Reservations: Not Supported 00:14:01.418 ===================================================== 00:14:01.418 NVMe Controller at PCI bus 0, device 18, function 0 00:14:01.418 ===================================================== 00:14:01.418 Reservations: Not Supported 00:14:01.418 Reservation test passed 00:14:01.418 00:14:01.418 real 0m0.199s 00:14:01.418 user 0m0.070s 00:14:01.418 sys 0m0.081s 00:14:01.418 12:45:01 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.418 12:45:01 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:14:01.418 ************************************ 00:14:01.418 END TEST nvme_reserve 00:14:01.418 ************************************ 00:14:01.418 12:45:01 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:01.418 12:45:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.418 12:45:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.418 12:45:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.418 ************************************ 00:14:01.418 START TEST nvme_err_injection 00:14:01.418 ************************************ 00:14:01.418 12:45:01 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:14:01.676 NVMe Error Injection test 00:14:01.676 Attached to 0000:00:10.0 00:14:01.676 Attached to 0000:00:11.0 00:14:01.676 Attached to 0000:00:13.0 00:14:01.676 Attached to 0000:00:12.0 00:14:01.676 0000:00:10.0: get features failed as expected 00:14:01.676 0000:00:11.0: get features failed as expected 00:14:01.676 0000:00:13.0: get features failed as expected 00:14:01.676 0000:00:12.0: get features failed as expected 00:14:01.676 
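The reserve test above found "Reservations: Not Supported" on every emulated controller. On a live controller the same capability is advertised in the Identify Controller ONCS field, where bit 5 (0x20) indicates reservation support per the NVMe specification; with nvme-cli installed it can be read directly (hypothetical device path, shown only as a parallel to the in-test check):

  nvme id-ctrl /dev/nvme0 | grep -i oncs   # bit 5 (0x20) set => reservations supported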
0000:00:10.0: get features successfully as expected 00:14:01.676 0000:00:11.0: get features successfully as expected 00:14:01.676 0000:00:13.0: get features successfully as expected 00:14:01.676 0000:00:12.0: get features successfully as expected 00:14:01.676 0000:00:10.0: read failed as expected 00:14:01.676 0000:00:11.0: read failed as expected 00:14:01.676 0000:00:13.0: read failed as expected 00:14:01.676 0000:00:12.0: read failed as expected 00:14:01.676 0000:00:11.0: read successfully as expected 00:14:01.676 0000:00:13.0: read successfully as expected 00:14:01.676 0000:00:10.0: read successfully as expected 00:14:01.676 0000:00:12.0: read successfully as expected 00:14:01.676 Cleaning up... 00:14:01.676 00:14:01.676 real 0m0.213s 00:14:01.676 user 0m0.082s 00:14:01.676 sys 0m0.087s 00:14:01.676 12:45:01 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.676 12:45:01 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:14:01.676 ************************************ 00:14:01.676 END TEST nvme_err_injection 00:14:01.676 ************************************ 00:14:01.676 12:45:01 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:01.676 12:45:01 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:14:01.676 12:45:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.676 12:45:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.676 ************************************ 00:14:01.676 START TEST nvme_overhead 00:14:01.676 ************************************ 00:14:01.676 12:45:01 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:14:03.048 Initializing NVMe Controllers 00:14:03.048 Attached to 0000:00:10.0 00:14:03.048 Attached to 0000:00:11.0 00:14:03.048 Attached to 0000:00:13.0 00:14:03.048 Attached to 0000:00:12.0 00:14:03.048 Initialization complete. Launching workers. 

00:14:03.048 submit (in ns)   avg, min, max = 12035.4, 10630.8, 267411.5
00:14:03.048 complete (in ns) avg, min, max = 7498.3, 7127.7, 81622.3
00:14:03.048
00:14:03.048 Submit histogram
00:14:03.048 ================
00:14:03.048        Range in us     Cumulative     Count
00:14:03.049 [per-bucket data: ranges 10.585 us through 267.815 us, cumulative count reaching 100.0000%]
00:14:03.049 Complete histogram
00:14:03.049 ==================
00:14:03.049        Range in us     Cumulative     Count
00:14:03.050 [per-bucket data: ranges 7.089 us through 81.920 us, cumulative count reaching 100.0000%]
00:14:03.050
00:14:03.050 real	0m1.207s
00:14:03.050 user	0m1.070s
00:14:03.050 sys	0m0.090s
00:14:03.050 12:45:02 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:03.050 12:45:02 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:14:03.050 ************************************
00:14:03.050 END TEST nvme_overhead
00:14:03.050 ************************************
00:14:03.050 12:45:02 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:14:03.050 12:45:02 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:14:03.050 12:45:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:03.050 12:45:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:03.050 ************************************
00:14:03.050 START TEST nvme_arbitration
00:14:03.050 ************************************
00:14:03.050 12:45:02 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:14:06.327 Initializing NVMe Controllers 00:14:06.327 Attached to 0000:00:10.0 00:14:06.327 Attached to 0000:00:11.0 00:14:06.327 Attached to 0000:00:13.0 00:14:06.327 Attached to 0000:00:12.0 00:14:06.327 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:14:06.327 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:14:06.327 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:14:06.327 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:14:06.327 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:14:06.327 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:14:06.327 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:06.327 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:14:06.327 Initialization complete. Launching workers. 00:14:06.327 Starting thread on core 1 with urgent priority queue 00:14:06.327 Starting thread on core 2 with urgent priority queue 00:14:06.327 Starting thread on core 3 with urgent priority queue 00:14:06.327 Starting thread on core 0 with urgent priority queue 00:14:06.327 QEMU NVMe Ctrl (12340 ) core 0: 5748.33 IO/s 17.40 secs/100000 ios 00:14:06.328 QEMU NVMe Ctrl (12342 ) core 0: 5780.33 IO/s 17.30 secs/100000 ios 00:14:06.328 QEMU NVMe Ctrl (12341 ) core 1: 5891.67 IO/s 16.97 secs/100000 ios 00:14:06.328 QEMU NVMe Ctrl (12342 ) core 1: 5834.00 IO/s 17.14 secs/100000 ios 00:14:06.328 QEMU NVMe Ctrl (12343 ) core 2: 5607.00 IO/s 17.83 secs/100000 ios 00:14:06.328 QEMU NVMe Ctrl (12342 ) core 3: 5639.00 IO/s 17.73 secs/100000 ios 00:14:06.328 ======================================================== 00:14:06.328 00:14:06.328 00:14:06.328 real 0m3.226s 00:14:06.328 user 0m9.019s 00:14:06.328 sys 0m0.115s 00:14:06.328 12:45:05 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.328 12:45:05 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:14:06.328 ************************************ 00:14:06.328 END TEST nvme_arbitration 00:14:06.328 ************************************ 00:14:06.328 12:45:05 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:06.328 12:45:05 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:06.328 12:45:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.328 12:45:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.328 ************************************ 00:14:06.328 START TEST nvme_single_aen 00:14:06.328 ************************************ 00:14:06.328 12:45:05 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:14:06.586 Asynchronous Event Request test 00:14:06.586 Attached to 0000:00:10.0 00:14:06.586 Attached to 0000:00:11.0 00:14:06.586 Attached to 0000:00:13.0 00:14:06.586 Attached to 0000:00:12.0 00:14:06.586 Reset controller to setup AER completions for this process 00:14:06.586 Registering asynchronous event callbacks... 
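The per-core arbitration result lines above pair an IO/s rate with a secs/100000 ios figure; over the fixed -n 100000 I/O budget shown in the run configuration, the two are reciprocals, which makes for a quick consistency check with bc (using the first core-0 line as the example):

  echo 'scale=2; 100000 / 5748.33' | bc   # => 17.39, matching the reported 17.40 secs/100000 ios after rounding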
00:14:06.586 Getting orig temperature thresholds of all controllers 00:14:06.586 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:06.586 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:06.586 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:06.586 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:06.586 Setting all controllers temperature threshold low to trigger AER 00:14:06.586 Waiting for all controllers temperature threshold to be set lower 00:14:06.586 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:06.586 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:06.586 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:06.586 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:06.586 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:06.586 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:06.586 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:06.586 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:06.586 Waiting for all controllers to trigger AER and reset threshold 00:14:06.586 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:06.586 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:06.586 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:06.586 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:06.586 Cleaning up... 00:14:06.586 00:14:06.586 real 0m0.216s 00:14:06.586 user 0m0.085s 00:14:06.586 sys 0m0.087s 00:14:06.586 12:45:06 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.586 12:45:06 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:14:06.586 ************************************ 00:14:06.586 END TEST nvme_single_aen 00:14:06.586 ************************************ 00:14:06.586 12:45:06 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:14:06.586 12:45:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:06.586 12:45:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.586 12:45:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:06.586 ************************************ 00:14:06.586 START TEST nvme_doorbell_aers 00:14:06.586 ************************************ 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
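The doorbell_aers setup above builds its device list by piping gen_nvme.sh's JSON config through jq. Run standalone, the same pipeline yields one PCI address per line; the expected output on this four-controller VM is shown as comments, matching the printf a few lines below:

  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
  # 0000:00:10.0
  # 0000:00:11.0
  # 0000:00:12.0
  # 0000:00:13.0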
00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:06.586 12:45:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:06.843 [2024-12-05 12:45:06.470011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:16.814 Executing: test_write_invalid_db 00:14:16.814 Waiting for AER completion... 00:14:16.814 Failure: test_write_invalid_db 00:14:16.814 00:14:16.814 Executing: test_invalid_db_write_overflow_sq 00:14:16.814 Waiting for AER completion... 00:14:16.814 Failure: test_invalid_db_write_overflow_sq 00:14:16.814 00:14:16.814 Executing: test_invalid_db_write_overflow_cq 00:14:16.814 Waiting for AER completion... 00:14:16.814 Failure: test_invalid_db_write_overflow_cq 00:14:16.814 00:14:16.814 12:45:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:16.814 12:45:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:14:16.814 [2024-12-05 12:45:16.509796] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:26.760 Executing: test_write_invalid_db 00:14:26.760 Waiting for AER completion... 00:14:26.760 Failure: test_write_invalid_db 00:14:26.760 00:14:26.760 Executing: test_invalid_db_write_overflow_sq 00:14:26.760 Waiting for AER completion... 00:14:26.760 Failure: test_invalid_db_write_overflow_sq 00:14:26.760 00:14:26.761 Executing: test_invalid_db_write_overflow_cq 00:14:26.761 Waiting for AER completion... 00:14:26.761 Failure: test_invalid_db_write_overflow_cq 00:14:26.761 00:14:26.761 12:45:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:26.761 12:45:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:14:26.761 [2024-12-05 12:45:26.539993] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:36.716 Executing: test_write_invalid_db 00:14:36.716 Waiting for AER completion... 00:14:36.716 Failure: test_write_invalid_db 00:14:36.716 00:14:36.716 Executing: test_invalid_db_write_overflow_sq 00:14:36.716 Waiting for AER completion... 00:14:36.716 Failure: test_invalid_db_write_overflow_sq 00:14:36.716 00:14:36.716 Executing: test_invalid_db_write_overflow_cq 00:14:36.716 Waiting for AER completion... 
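The doorbell test iterates over every bdf reported by gen_nvme.sh and runs doorbell_aers under a 10-second timeout; the per-operation "Failure: test_write_invalid_db" lines appear to be the expected outcome here (the suite still reaches END TEST nvme_doorbell_aers below and the run proceeds), since the point is that invalid doorbell writes are rejected and surface as AERs. The loop, reconstructed from the xtrace above:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
            -r "trtype:PCIe traddr:$bdf"
    done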
00:14:36.716 Failure: test_invalid_db_write_overflow_cq 00:14:36.716 00:14:36.716 12:45:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:14:36.716 12:45:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:14:36.716 [2024-12-05 12:45:36.558229] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.673 Executing: test_write_invalid_db 00:14:46.673 Waiting for AER completion... 00:14:46.673 Failure: test_write_invalid_db 00:14:46.673 00:14:46.673 Executing: test_invalid_db_write_overflow_sq 00:14:46.673 Waiting for AER completion... 00:14:46.673 Failure: test_invalid_db_write_overflow_sq 00:14:46.673 00:14:46.673 Executing: test_invalid_db_write_overflow_cq 00:14:46.673 Waiting for AER completion... 00:14:46.673 Failure: test_invalid_db_write_overflow_cq 00:14:46.673 00:14:46.673 00:14:46.673 real 0m40.193s 00:14:46.673 user 0m34.333s 00:14:46.673 sys 0m5.499s 00:14:46.673 12:45:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.673 12:45:46 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:14:46.673 ************************************ 00:14:46.673 END TEST nvme_doorbell_aers 00:14:46.673 ************************************ 00:14:46.673 12:45:46 nvme -- nvme/nvme.sh@97 -- # uname 00:14:46.673 12:45:46 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:14:46.673 12:45:46 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:46.673 12:45:46 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:14:46.673 12:45:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.673 12:45:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.673 ************************************ 00:14:46.673 START TEST nvme_multi_aen 00:14:46.673 ************************************ 00:14:46.673 12:45:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:14:46.931 [2024-12-05 12:45:46.665438] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.665517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.665535] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.666854] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.666884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.666894] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.667939] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. 
Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.667968] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.667978] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.669137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.669243] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 [2024-12-05 12:45:46.669317] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75249) is not found. Dropping the request. 00:14:46.931 Child process pid: 75769 00:14:47.189 [Child] Asynchronous Event Request test 00:14:47.189 [Child] Attached to 0000:00:10.0 00:14:47.189 [Child] Attached to 0000:00:11.0 00:14:47.189 [Child] Attached to 0000:00:13.0 00:14:47.189 [Child] Attached to 0000:00:12.0 00:14:47.189 [Child] Registering asynchronous event callbacks... 00:14:47.189 [Child] Getting orig temperature thresholds of all controllers 00:14:47.189 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 [Child] Waiting for all controllers to trigger AER and reset threshold 00:14:47.189 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 [Child] Cleaning up... 00:14:47.189 Asynchronous Event Request test 00:14:47.189 Attached to 0000:00:10.0 00:14:47.189 Attached to 0000:00:11.0 00:14:47.189 Attached to 0000:00:13.0 00:14:47.189 Attached to 0000:00:12.0 00:14:47.189 Reset controller to setup AER completions for this process 00:14:47.189 Registering asynchronous event callbacks... 
00:14:47.189 Getting orig temperature thresholds of all controllers 00:14:47.189 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:14:47.189 Setting all controllers temperature threshold low to trigger AER 00:14:47.189 Waiting for all controllers temperature threshold to be set lower 00:14:47.189 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:14:47.189 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:14:47.189 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:14:47.189 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:14:47.189 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:14:47.189 Waiting for all controllers to trigger AER and reset threshold 00:14:47.189 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:14:47.189 Cleaning up... 00:14:47.189 00:14:47.189 real 0m0.462s 00:14:47.189 user 0m0.135s 00:14:47.189 sys 0m0.211s 00:14:47.189 12:45:46 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.189 12:45:46 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:14:47.189 ************************************ 00:14:47.189 END TEST nvme_multi_aen 00:14:47.189 ************************************ 00:14:47.189 12:45:46 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:47.189 12:45:46 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:47.189 12:45:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.189 12:45:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.189 ************************************ 00:14:47.189 START TEST nvme_startup 00:14:47.189 ************************************ 00:14:47.189 12:45:46 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:14:47.447 Initializing NVMe Controllers 00:14:47.447 Attached to 0000:00:10.0 00:14:47.447 Attached to 0000:00:11.0 00:14:47.447 Attached to 0000:00:13.0 00:14:47.447 Attached to 0000:00:12.0 00:14:47.447 Initialization complete. 00:14:47.447 Time used:147384.500 (us). 
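The multi-AEN pass above repeats the same temperature-threshold sequence from two processes: a forked child (pid 75769, the [Child] lines) runs it first, then the parent does, verifying AEN delivery to more than one process attached to the same four controllers. Invocation as logged (the -m gloss is inferred from the [Child] output):

    # -m adds a child process that runs the AER sequence first
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0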
00:14:47.447 00:14:47.447 real 0m0.208s 00:14:47.447 user 0m0.071s 00:14:47.447 sys 0m0.091s 00:14:47.447 12:45:47 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.447 12:45:47 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:14:47.447 ************************************ 00:14:47.447 END TEST nvme_startup 00:14:47.447 ************************************ 00:14:47.447 12:45:47 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:14:47.447 12:45:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:47.447 12:45:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.447 12:45:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.447 ************************************ 00:14:47.447 START TEST nvme_multi_secondary 00:14:47.447 ************************************ 00:14:47.447 12:45:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:14:47.447 12:45:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75824 00:14:47.447 12:45:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:14:47.447 12:45:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75825 00:14:47.447 12:45:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:14:47.447 12:45:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:50.743 Initializing NVMe Controllers 00:14:50.743 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.743 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:50.743 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:50.743 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:50.743 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:50.743 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:50.743 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:50.743 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:50.744 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:50.744 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:50.744 Initialization complete. Launching workers. 
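nvme_multi_secondary launches three spdk_nvme_perf instances concurrently in one shared-memory group (-i 0) on disjoint core masks, so the first to start becomes the DPDK primary process and the other two attach as secondaries. A sketch of the orchestration reconstructed from the pid0/pid1 xtrace above (the exact backgrounding order is interleaved in the log):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!  # lcore 0, 5 s
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!  # lcore 1, 3 s
    $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4            # lcore 2, 3 s
    wait $pid0
    wait $pid1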
00:14:50.744 ======================================================== 00:14:50.744 Latency(us) 00:14:50.744 Device Information : IOPS MiB/s Average min max 00:14:50.744 PCIE (0000:00:10.0) NSID 1 from core 2: 2894.02 11.30 5526.27 918.96 21155.48 00:14:50.744 PCIE (0000:00:11.0) NSID 1 from core 2: 2894.02 11.30 5528.37 911.41 18946.40 00:14:50.744 PCIE (0000:00:13.0) NSID 1 from core 2: 2894.02 11.30 5527.21 862.56 20507.54 00:14:50.744 PCIE (0000:00:12.0) NSID 1 from core 2: 2894.02 11.30 5527.83 863.29 23763.05 00:14:50.744 PCIE (0000:00:12.0) NSID 2 from core 2: 2894.02 11.30 5528.47 891.44 19958.56 00:14:50.744 PCIE (0000:00:12.0) NSID 3 from core 2: 2894.02 11.30 5529.06 926.56 20325.73 00:14:50.744 ======================================================== 00:14:50.744 Total : 17364.10 67.83 5527.87 862.56 23763.05 00:14:50.744 00:14:50.744 12:45:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75824 00:14:50.744 Initializing NVMe Controllers 00:14:50.744 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.744 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:50.744 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:50.744 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:50.744 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:50.744 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:50.744 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:50.744 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:50.744 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:50.744 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:50.744 Initialization complete. Launching workers. 00:14:50.744 ======================================================== 00:14:50.744 Latency(us) 00:14:50.744 Device Information : IOPS MiB/s Average min max 00:14:50.744 PCIE (0000:00:10.0) NSID 1 from core 1: 6738.95 26.32 2372.66 980.71 11588.02 00:14:50.744 PCIE (0000:00:11.0) NSID 1 from core 1: 6738.95 26.32 2373.79 1001.99 11297.18 00:14:50.744 PCIE (0000:00:13.0) NSID 1 from core 1: 6738.95 26.32 2373.76 1009.70 13218.00 00:14:50.744 PCIE (0000:00:12.0) NSID 1 from core 1: 6738.95 26.32 2373.74 1008.69 12021.86 00:14:50.744 PCIE (0000:00:12.0) NSID 2 from core 1: 6738.95 26.32 2373.67 991.41 11390.50 00:14:50.744 PCIE (0000:00:12.0) NSID 3 from core 1: 6738.95 26.32 2373.56 903.76 11433.85 00:14:50.744 ======================================================== 00:14:50.744 Total : 40433.67 157.94 2373.53 903.76 13218.00 00:14:50.744 00:14:52.640 Initializing NVMe Controllers 00:14:52.640 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:52.640 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:52.640 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:52.640 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:52.640 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:52.640 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:52.640 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:52.640 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:52.640 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:52.640 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:52.640 Initialization complete. Launching workers. 
00:14:52.640 ======================================================== 00:14:52.640 Latency(us) 00:14:52.640 Device Information : IOPS MiB/s Average min max 00:14:52.640 PCIE (0000:00:10.0) NSID 1 from core 0: 8926.52 34.87 1791.05 292.19 12564.63 00:14:52.640 PCIE (0000:00:11.0) NSID 1 from core 0: 9093.12 35.52 1759.14 209.29 12451.85 00:14:52.640 PCIE (0000:00:13.0) NSID 1 from core 0: 9746.51 38.07 1641.15 193.45 9696.37 00:14:52.640 PCIE (0000:00:12.0) NSID 1 from core 0: 9309.11 36.36 1718.24 235.05 11909.09 00:14:52.640 PCIE (0000:00:12.0) NSID 2 from core 0: 9265.31 36.19 1726.33 221.10 11475.05 00:14:52.640 PCIE (0000:00:12.0) NSID 3 from core 0: 9282.51 36.26 1723.09 222.43 12627.83 00:14:52.640 ======================================================== 00:14:52.640 Total : 55623.09 217.28 1725.26 193.45 12627.83 00:14:52.640 00:14:52.897 12:45:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75825 00:14:52.897 12:45:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75895 00:14:52.897 12:45:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:14:52.897 12:45:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75896 00:14:52.897 12:45:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:14:52.897 12:45:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:14:56.191 Initializing NVMe Controllers 00:14:56.191 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:56.191 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:56.191 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:56.191 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:56.191 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:14:56.191 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:14:56.191 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:14:56.191 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:14:56.191 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:14:56.191 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:14:56.191 Initialization complete. Launching workers. 
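A quick consistency check on these latency tables: the MiB/s column is IOPS x I/O size / 2^20, and Average/min/max are completion latencies in microseconds per the "(us)" header. For the "from core 1" rows above:

    awk 'BEGIN { printf "%.2f\n", 6738.95 * 4096 / 1048576 }'   # -> 26.32, matching the table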
00:14:56.191 ======================================================== 00:14:56.191 Latency(us) 00:14:56.191 Device Information : IOPS MiB/s Average min max 00:14:56.191 PCIE (0000:00:10.0) NSID 1 from core 1: 4359.13 17.03 3668.72 787.26 12246.34 00:14:56.191 PCIE (0000:00:11.0) NSID 1 from core 1: 4359.13 17.03 3670.88 816.67 13621.79 00:14:56.192 PCIE (0000:00:13.0) NSID 1 from core 1: 4359.13 17.03 3670.68 803.05 13038.40 00:14:56.192 PCIE (0000:00:12.0) NSID 1 from core 1: 4359.13 17.03 3671.11 812.71 13878.28 00:14:56.192 PCIE (0000:00:12.0) NSID 2 from core 1: 4359.13 17.03 3671.07 813.38 13592.38 00:14:56.192 PCIE (0000:00:12.0) NSID 3 from core 1: 4359.13 17.03 3671.00 816.40 13350.46 00:14:56.192 ======================================================== 00:14:56.192 Total : 26154.77 102.17 3670.57 787.26 13878.28 00:14:56.192 00:14:56.192 Initializing NVMe Controllers 00:14:56.192 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:56.192 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:56.192 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:56.192 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:56.192 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:56.192 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:14:56.192 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:14:56.192 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:14:56.192 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:14:56.192 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:14:56.192 Initialization complete. Launching workers. 00:14:56.192 ======================================================== 00:14:56.192 Latency(us) 00:14:56.192 Device Information : IOPS MiB/s Average min max 00:14:56.192 PCIE (0000:00:10.0) NSID 1 from core 0: 3881.90 15.16 4119.77 1214.46 14410.77 00:14:56.192 PCIE (0000:00:11.0) NSID 1 from core 0: 3881.90 15.16 4122.74 1123.99 12993.46 00:14:56.192 PCIE (0000:00:13.0) NSID 1 from core 0: 3881.90 15.16 4122.71 1103.93 12723.82 00:14:56.192 PCIE (0000:00:12.0) NSID 1 from core 0: 3881.90 15.16 4122.64 1248.39 12526.16 00:14:56.192 PCIE (0000:00:12.0) NSID 2 from core 0: 3881.90 15.16 4122.54 1110.89 12000.70 00:14:56.192 PCIE (0000:00:12.0) NSID 3 from core 0: 3881.90 15.16 4122.46 1104.00 13211.49 00:14:56.192 ======================================================== 00:14:56.192 Total : 23291.42 90.98 4122.14 1103.93 14410.77 00:14:56.192 00:14:58.102 Initializing NVMe Controllers 00:14:58.102 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:58.102 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:14:58.102 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:14:58.102 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:14:58.102 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:14:58.102 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:14:58.102 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:14:58.102 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:14:58.102 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:14:58.102 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:14:58.102 Initialization complete. Launching workers. 
00:14:58.102 ======================================================== 00:14:58.102 Latency(us) 00:14:58.102 Device Information : IOPS MiB/s Average min max 00:14:58.102 PCIE (0000:00:10.0) NSID 1 from core 2: 2204.06 8.61 7256.17 896.03 32738.46 00:14:58.102 PCIE (0000:00:11.0) NSID 1 from core 2: 2204.06 8.61 7259.31 930.77 33746.58 00:14:58.102 PCIE (0000:00:13.0) NSID 1 from core 2: 2204.06 8.61 7258.82 910.30 28028.27 00:14:58.102 PCIE (0000:00:12.0) NSID 1 from core 2: 2204.06 8.61 7258.68 884.64 28750.94 00:14:58.102 PCIE (0000:00:12.0) NSID 2 from core 2: 2204.06 8.61 7258.88 905.58 31952.58 00:14:58.102 PCIE (0000:00:12.0) NSID 3 from core 2: 2204.06 8.61 7258.71 906.42 34125.84 00:14:58.102 ======================================================== 00:14:58.102 Total : 13224.34 51.66 7258.43 884.64 34125.84 00:14:58.102 00:14:58.102 ************************************ 00:14:58.102 END TEST nvme_multi_secondary 00:14:58.102 ************************************ 00:14:58.102 12:45:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75895 00:14:58.102 12:45:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75896 00:14:58.102 00:14:58.102 real 0m10.628s 00:14:58.102 user 0m18.280s 00:14:58.102 sys 0m0.658s 00:14:58.102 12:45:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.102 12:45:57 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:14:58.103 12:45:57 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:14:58.103 12:45:57 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:14:58.103 12:45:57 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74858 ]] 00:14:58.103 12:45:57 nvme -- common/autotest_common.sh@1094 -- # kill 74858 00:14:58.103 12:45:57 nvme -- common/autotest_common.sh@1095 -- # wait 74858 00:14:58.103 [2024-12-05 12:45:57.882197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.882535] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.882671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.882700] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.883478] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.883523] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.883542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.883564] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.884661] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 
00:14:58.103 [2024-12-05 12:45:57.884878] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.884901] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.884922] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.886584] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.886643] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.886665] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.103 [2024-12-05 12:45:57.886682] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75768) is not found. Dropping the request. 00:14:58.364 12:45:57 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:14:58.364 12:45:57 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:14:58.364 12:45:57 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:58.364 12:45:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.364 12:45:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.364 12:45:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.364 ************************************ 00:14:58.364 START TEST bdev_nvme_reset_stuck_adm_cmd 00:14:58.364 ************************************ 00:14:58.364 12:45:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:14:58.364 * Looking for test storage... 
00:14:58.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.364 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:58.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.365 --rc genhtml_branch_coverage=1 00:14:58.365 --rc genhtml_function_coverage=1 00:14:58.365 --rc genhtml_legend=1 00:14:58.365 --rc geninfo_all_blocks=1 00:14:58.365 --rc geninfo_unexecuted_blocks=1 00:14:58.365 00:14:58.365 ' 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:58.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.365 --rc genhtml_branch_coverage=1 00:14:58.365 --rc genhtml_function_coverage=1 00:14:58.365 --rc genhtml_legend=1 00:14:58.365 --rc geninfo_all_blocks=1 00:14:58.365 --rc geninfo_unexecuted_blocks=1 00:14:58.365 00:14:58.365 ' 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:58.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.365 --rc genhtml_branch_coverage=1 00:14:58.365 --rc genhtml_function_coverage=1 00:14:58.365 --rc genhtml_legend=1 00:14:58.365 --rc geninfo_all_blocks=1 00:14:58.365 --rc geninfo_unexecuted_blocks=1 00:14:58.365 00:14:58.365 ' 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:58.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.365 --rc genhtml_branch_coverage=1 00:14:58.365 --rc genhtml_function_coverage=1 00:14:58.365 --rc genhtml_legend=1 00:14:58.365 --rc geninfo_all_blocks=1 00:14:58.365 --rc geninfo_unexecuted_blocks=1 00:14:58.365 00:14:58.365 ' 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:14:58.365 
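The wall of scripts/common.sh xtrace above is a component-wise version comparison: lcov's version (1.15) is split on the separators, compared field by field against 2, and since 1 < 2 the lcov 1.x coverage options get exported. A simplified sketch of the comparison the trace walks through (an assumption-laden reduction; the real cmp_versions also handles '>', '>=', '=' and validates numeric fields):

    lt() {  # return 0 iff version $1 < version $2, component-wise
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo "lcov < 2: use 1.x branch/function coverage flags"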
12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:58.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76057 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76057 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 76057 ']' 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
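The reset-stuck-admin-cmd test needs one controller to abuse, so it takes the first bdf from the same gen_nvme.sh | jq pipeline used earlier and starts spdk_tgt on lcores 0-3. An equivalent one-liner sketch (the harness's get_first_nvme_bdf does the same through an array rather than head):

    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
          | jq -r '.config[].params.traddr' | head -n1)   # -> 0000:00:10.0 on this rig
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &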
00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.365 12:45:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:58.626 [2024-12-05 12:45:58.283213] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:14:58.626 [2024-12-05 12:45:58.283680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76057 ] 00:14:58.626 [2024-12-05 12:45:58.449738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.886 [2024-12-05 12:45:58.478928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.886 [2024-12-05 12:45:58.479122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.886 [2024-12-05 12:45:58.480360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.886 [2024-12-05 12:45:58.480486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.454 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:59.455 nvme0n1 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_9rfLH.txt 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:14:59.455 true 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733402759 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76080 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:14:59.455 12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:59.455 
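The sequence just traced is the heart of the test: attach the controller as bdev nvme0, arm a one-shot error injection that holds the next admin command with opcode 10 (0x0a, Get Features) for up to 15 s without submitting it, then send exactly such a command so it gets stuck. The base64 blob is the raw admin command buffer: byte 0 is 0x0a and cdw10 is 0x7 (Number of Queues), matching the "GET FEATURES NUMBER OF QUEUES ... cdw10:00000007" completion printed below. RPC flow as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0   # the reset completes the stuck
                                            # command manually with sct=0/sc=1
                                            # (00/01, INVALID OPCODE), which the
                                            # test decodes and verifies below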
12:45:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:01.991 [2024-12-05 12:46:01.230334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:15:01.991 [2024-12-05 12:46:01.231145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:01.991 [2024-12-05 12:46:01.231294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.991 [2024-12-05 12:46:01.231351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.991 [2024-12-05 12:46:01.235117] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:15:01.991 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76080 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76080 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76080 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_9rfLH.txt 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_9rfLH.txt 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76057 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 76057 ']' 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 76057 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76057 00:15:01.991 killing process with pid 76057 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76057' 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 76057 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 76057 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:15:01.991 ************************************ 00:15:01.991 END TEST bdev_nvme_reset_stuck_adm_cmd 00:15:01.991 ************************************ 00:15:01.991 00:15:01.991 real 0m3.699s 
00:15:01.991 user 0m13.042s 00:15:01.991 sys 0m0.577s 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.991 12:46:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:15:01.991 12:46:01 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:15:01.991 12:46:01 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:15:01.991 12:46:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.991 12:46:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.991 12:46:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.991 ************************************ 00:15:01.991 START TEST nvme_fio 00:15:01.991 ************************************ 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:01.991 12:46:01 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:01.991 12:46:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:02.252 12:46:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:02.252 12:46:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:02.515 12:46:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:02.515 12:46:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:02.515 12:46:02 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:02.515 12:46:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:15:02.777 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:02.777 fio-3.35 00:15:02.777 Starting 1 thread 00:15:09.441 00:15:09.441 test: (groupid=0, jobs=1): err= 0: pid=76203: Thu Dec 5 12:46:08 2024 00:15:09.441 read: IOPS=17.1k, BW=66.7MiB/s (69.9MB/s)(133MiB/2001msec) 00:15:09.441 slat (usec): min=4, max=988, avg= 6.79, stdev= 8.28 00:15:09.441 clat (usec): min=243, max=11662, avg=3716.69, stdev=1279.94 00:15:09.441 lat (usec): min=258, max=11705, avg=3723.48, stdev=1281.62 00:15:09.441 clat percentiles (usec): 00:15:09.441 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2737], 00:15:09.441 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3195], 60.00th=[ 3458], 00:15:09.441 | 70.00th=[ 4015], 80.00th=[ 4752], 90.00th=[ 5669], 95.00th=[ 6456], 00:15:09.441 | 99.00th=[ 7701], 99.50th=[ 8160], 99.90th=[ 9372], 99.95th=[ 9765], 00:15:09.441 | 99.99th=[11338] 00:15:09.441 bw ( KiB/s): min=62112, max=69408, per=95.54%, avg=65226.33, stdev=3763.28, samples=3 00:15:09.441 iops : min=15528, max=17352, avg=16306.33, stdev=940.93, samples=3 00:15:09.441 write: IOPS=17.1k, BW=66.8MiB/s (70.0MB/s)(134MiB/2001msec); 0 zone resets 00:15:09.441 slat (usec): min=4, max=694, avg= 6.96, stdev= 5.13 00:15:09.441 clat (usec): min=609, max=11368, avg=3751.14, stdev=1274.22 00:15:09.441 lat (usec): min=615, max=11378, avg=3758.10, stdev=1275.78 00:15:09.441 clat percentiles (usec): 00:15:09.441 | 1.00th=[ 2376], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2769], 00:15:09.441 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3228], 60.00th=[ 3523], 00:15:09.441 | 70.00th=[ 4080], 80.00th=[ 4817], 90.00th=[ 5669], 95.00th=[ 6456], 00:15:09.441 | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[ 9241], 99.95th=[ 9634], 00:15:09.441 | 99.99th=[11076] 00:15:09.441 bw ( KiB/s): min=61312, max=69088, per=95.03%, avg=64991.33, stdev=3904.76, samples=3 00:15:09.441 iops : min=15328, max=17272, avg=16247.67, stdev=976.22, samples=3 00:15:09.441 lat (usec) : 250=0.01%, 750=0.01% 00:15:09.441 lat (msec) : 2=0.14%, 4=69.14%, 10=30.69%, 20=0.02% 00:15:09.441 cpu : usr=97.80%, sys=0.45%, ctx=45, majf=0, minf=626 00:15:09.441 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.441 issued rwts: total=34153,34213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.441 00:15:09.441 Run status group 0 (all jobs): 00:15:09.441 READ: bw=66.7MiB/s (69.9MB/s), 66.7MiB/s-66.7MiB/s (69.9MB/s-69.9MB/s), io=133MiB (140MB), run=2001-2001msec 00:15:09.441 WRITE: bw=66.8MiB/s (70.0MB/s), 66.8MiB/s-66.8MiB/s (70.0MB/s-70.0MB/s), io=134MiB (140MB), run=2001-2001msec 00:15:09.441 ----------------------------------------------------- 00:15:09.441 Suppressions used: 00:15:09.441 count bytes template 00:15:09.441 1 32 /usr/src/fio/parse.c 00:15:09.441 1 8 libtcmalloc_minimal.so 00:15:09.441 ----------------------------------------------------- 00:15:09.441 00:15:09.441 12:46:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:09.441 12:46:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:09.441 12:46:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:09.441 12:46:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:09.441 12:46:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:15:09.441 12:46:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:09.441 12:46:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:09.441 12:46:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:09.441 12:46:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:15:09.441 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:09.441 fio-3.35 00:15:09.441 Starting 1 thread 00:15:16.028 00:15:16.028 test: (groupid=0, jobs=1): err= 0: pid=76264: Thu Dec 5 12:46:15 2024 00:15:16.028 read: IOPS=17.4k, BW=67.8MiB/s (71.1MB/s)(136MiB/2001msec) 00:15:16.028 slat (nsec): min=4826, max=72551, avg=6447.23, stdev=2902.01 00:15:16.028 clat (usec): min=277, max=31081, avg=3613.62, stdev=1480.29 00:15:16.028 lat (usec): min=283, max=31086, avg=3620.07, stdev=1481.29 00:15:16.028 clat percentiles (usec): 00:15:16.028 | 1.00th=[ 2089], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2737], 00:15:16.028 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 3097], 60.00th=[ 3359], 00:15:16.028 | 70.00th=[ 3851], 80.00th=[ 4490], 90.00th=[ 5276], 95.00th=[ 5932], 00:15:16.028 | 99.00th=[ 7308], 99.50th=[ 8094], 99.90th=[20317], 99.95th=[26084], 00:15:16.028 | 99.99th=[27657] 00:15:16.028 bw ( KiB/s): min=70152, max=72776, per=100.00%, avg=71509.33, stdev=1314.35, samples=3 00:15:16.028 iops : min=17538, max=18194, avg=17877.33, stdev=328.59, samples=3 00:15:16.028 write: IOPS=17.4k, BW=67.8MiB/s (71.1MB/s)(136MiB/2001msec); 0 zone resets 00:15:16.028 slat (nsec): min=4877, max=81958, avg=6685.91, stdev=2942.19 00:15:16.028 clat (usec): min=221, max=38604, avg=3732.26, stdev=2148.69 00:15:16.028 lat (usec): min=227, max=38609, avg=3738.94, stdev=2149.31 00:15:16.028 clat percentiles (usec): 00:15:16.028 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2737], 00:15:16.028 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3130], 60.00th=[ 3392], 00:15:16.028 | 70.00th=[ 3884], 80.00th=[ 4555], 90.00th=[ 5342], 95.00th=[ 6063], 00:15:16.028 | 99.00th=[ 7898], 99.50th=[19006], 99.90th=[33424], 99.95th=[35914], 00:15:16.028 | 99.99th=[38011] 00:15:16.028 bw ( KiB/s): min=69608, max=73224, per=100.00%, avg=71445.33, stdev=1808.71, samples=3 00:15:16.028 iops : min=17402, max=18306, avg=17861.33, stdev=452.18, samples=3 00:15:16.028 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:15:16.028 lat (msec) : 2=0.58%, 4=71.59%, 10=27.18%, 20=0.31%, 50=0.29% 00:15:16.028 cpu : usr=98.85%, sys=0.05%, ctx=2, majf=0, minf=626 00:15:16.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:16.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.028 issued rwts: total=34727,34751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.028 00:15:16.028 Run status group 0 (all jobs): 00:15:16.028 READ: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=136MiB (142MB), run=2001-2001msec 00:15:16.028 WRITE: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=136MiB (142MB), run=2001-2001msec 00:15:16.028 ----------------------------------------------------- 00:15:16.028 Suppressions used: 00:15:16.028 count bytes template 00:15:16.028 1 32 /usr/src/fio/parse.c 00:15:16.028 1 8 libtcmalloc_minimal.so 00:15:16.028 ----------------------------------------------------- 00:15:16.028 00:15:16.028 12:46:15 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:16.028 12:46:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:16.028 12:46:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:16.028 12:46:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:16.028 12:46:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:16.028 12:46:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:15:16.289 12:46:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:16.289 12:46:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:16.289 12:46:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:15:16.550 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:16.550 fio-3.35 00:15:16.550 Starting 1 thread 00:15:23.131 00:15:23.131 test: (groupid=0, jobs=1): err= 0: pid=76325: Thu Dec 5 12:46:22 2024 00:15:23.131 read: IOPS=19.4k, BW=75.8MiB/s (79.5MB/s)(152MiB/2001msec) 00:15:23.131 slat (usec): min=4, max=144, avg= 6.04, stdev= 2.62 00:15:23.131 clat (usec): min=694, max=11169, avg=3285.31, stdev=1027.65 00:15:23.131 lat (usec): min=708, max=11184, avg=3291.35, stdev=1028.73 00:15:23.131 clat percentiles (usec): 00:15:23.131 | 1.00th=[ 2343], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:15:23.131 | 30.00th=[ 2737], 40.00th=[ 2802], 
50.00th=[ 2900], 60.00th=[ 3032], 00:15:23.131 | 70.00th=[ 3228], 80.00th=[ 3654], 90.00th=[ 4752], 95.00th=[ 5669], 00:15:23.131 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[ 9503], 99.95th=[ 9896], 00:15:23.131 | 99.99th=[10552] 00:15:23.131 bw ( KiB/s): min=68232, max=85245, per=98.57%, avg=76500.33, stdev=8516.50, samples=3 00:15:23.131 iops : min=17058, max=21311, avg=19125.00, stdev=2129.00, samples=3 00:15:23.131 write: IOPS=19.4k, BW=75.6MiB/s (79.3MB/s)(151MiB/2001msec); 0 zone resets 00:15:23.131 slat (nsec): min=4878, max=72294, avg=6284.02, stdev=2560.56 00:15:23.131 clat (usec): min=741, max=11313, avg=3298.34, stdev=1023.94 00:15:23.131 lat (usec): min=755, max=11318, avg=3304.62, stdev=1025.03 00:15:23.131 clat percentiles (usec): 00:15:23.131 | 1.00th=[ 2376], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671], 00:15:23.131 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 3032], 00:15:23.131 | 70.00th=[ 3228], 80.00th=[ 3654], 90.00th=[ 4686], 95.00th=[ 5669], 00:15:23.131 | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[ 9503], 99.95th=[ 9765], 00:15:23.131 | 99.99th=[10421] 00:15:23.131 bw ( KiB/s): min=68608, max=84950, per=98.97%, avg=76647.33, stdev=8174.18, samples=3 00:15:23.131 iops : min=17152, max=21237, avg=19161.67, stdev=2043.29, samples=3 00:15:23.131 lat (usec) : 750=0.01%, 1000=0.01% 00:15:23.131 lat (msec) : 2=0.15%, 4=83.92%, 10=15.89%, 20=0.04% 00:15:23.131 cpu : usr=98.80%, sys=0.10%, ctx=21, majf=0, minf=626 00:15:23.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:23.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.131 issued rwts: total=38823,38742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.131 00:15:23.131 Run status group 0 (all jobs): 00:15:23.131 READ: bw=75.8MiB/s (79.5MB/s), 75.8MiB/s-75.8MiB/s (79.5MB/s-79.5MB/s), io=152MiB (159MB), run=2001-2001msec 00:15:23.131 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=151MiB (159MB), run=2001-2001msec 00:15:23.131 ----------------------------------------------------- 00:15:23.131 Suppressions used: 00:15:23.131 count bytes template 00:15:23.131 1 32 /usr/src/fio/parse.c 00:15:23.131 1 8 libtcmalloc_minimal.so 00:15:23.131 ----------------------------------------------------- 00:15:23.131 00:15:23.131 12:46:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:23.131 12:46:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:15:23.131 12:46:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:23.131 12:46:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:15:23.131 12:46:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:15:23.131 12:46:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:15:23.391 12:46:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:15:23.391 12:46:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:23.391 12:46:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:15:23.391 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:23.391 fio-3.35 00:15:23.391 Starting 1 thread 00:15:28.678 00:15:28.678 test: (groupid=0, jobs=1): err= 0: pid=76388: Thu Dec 5 12:46:28 2024 00:15:28.678 read: IOPS=18.2k, BW=71.1MiB/s (74.6MB/s)(142MiB/2001msec) 00:15:28.678 slat (usec): min=4, max=114, avg= 6.30, stdev= 3.00 00:15:28.678 clat (usec): min=348, max=13451, avg=3504.90, stdev=1124.18 00:15:28.678 lat (usec): min=353, max=13458, avg=3511.21, stdev=1125.45 00:15:28.678 clat percentiles (usec): 00:15:28.678 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2671], 00:15:28.678 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3261], 00:15:28.678 | 70.00th=[ 3687], 80.00th=[ 4359], 90.00th=[ 5211], 95.00th=[ 5866], 00:15:28.678 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 8356], 99.95th=[ 8586], 00:15:28.678 | 99.99th=[ 9896] 00:15:28.678 bw ( KiB/s): min=63712, max=77928, per=97.06%, avg=70714.67, stdev=7110.34, samples=3 00:15:28.678 iops : min=15928, max=19482, avg=17678.67, stdev=1777.59, samples=3 00:15:28.678 write: IOPS=18.2k, BW=71.2MiB/s (74.7MB/s)(143MiB/2001msec); 0 zone resets 00:15:28.678 slat (nsec): min=4869, max=73293, avg=6546.17, stdev=2943.61 00:15:28.678 clat (usec): min=357, max=10544, avg=3494.35, stdev=1109.71 00:15:28.678 lat (usec): min=362, max=10554, avg=3500.90, stdev=1110.95 00:15:28.678 clat percentiles (usec): 00:15:28.678 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2704], 00:15:28.678 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3261], 00:15:28.678 | 70.00th=[ 3654], 80.00th=[ 4359], 90.00th=[ 5145], 95.00th=[ 5800], 00:15:28.678 
| 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 8291], 99.95th=[ 8586], 00:15:28.678 | 99.99th=[10028] 00:15:28.678 bw ( KiB/s): min=63496, max=77752, per=96.92%, avg=70677.33, stdev=7128.60, samples=3 00:15:28.678 iops : min=15874, max=19438, avg=17669.33, stdev=1782.15, samples=3 00:15:28.678 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:15:28.678 lat (msec) : 2=0.32%, 4=74.83%, 10=24.79%, 20=0.01% 00:15:28.678 cpu : usr=98.45%, sys=0.45%, ctx=14, majf=0, minf=623 00:15:28.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:28.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:28.678 issued rwts: total=36445,36481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.678 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:28.678 00:15:28.678 Run status group 0 (all jobs): 00:15:28.678 READ: bw=71.1MiB/s (74.6MB/s), 71.1MiB/s-71.1MiB/s (74.6MB/s-74.6MB/s), io=142MiB (149MB), run=2001-2001msec 00:15:28.678 WRITE: bw=71.2MiB/s (74.7MB/s), 71.2MiB/s-71.2MiB/s (74.7MB/s-74.7MB/s), io=143MiB (149MB), run=2001-2001msec 00:15:28.940 ----------------------------------------------------- 00:15:28.940 Suppressions used: 00:15:28.940 count bytes template 00:15:28.940 1 32 /usr/src/fio/parse.c 00:15:28.940 1 8 libtcmalloc_minimal.so 00:15:28.940 ----------------------------------------------------- 00:15:28.940 00:15:28.940 12:46:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:15:28.940 12:46:28 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:15:28.940 00:15:28.940 real 0m26.961s 00:15:28.940 user 0m16.342s 00:15:28.940 sys 0m19.313s 00:15:28.940 ************************************ 00:15:28.940 END TEST nvme_fio 00:15:28.940 ************************************ 00:15:28.940 12:46:28 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.940 12:46:28 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:15:28.940 ************************************ 00:15:28.940 END TEST nvme 00:15:28.940 ************************************ 00:15:28.940 00:15:28.940 real 1m34.586s 00:15:28.940 user 3m31.460s 00:15:28.940 sys 0m29.844s 00:15:28.940 12:46:28 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.940 12:46:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.201 12:46:28 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:15:29.201 12:46:28 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:29.201 12:46:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:29.201 12:46:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.201 12:46:28 -- common/autotest_common.sh@10 -- # set +x 00:15:29.201 ************************************ 00:15:29.201 START TEST nvme_scc 00:15:29.201 ************************************ 00:15:29.201 12:46:28 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:15:29.201 * Looking for test storage... 
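The four fio passes above (one per PCIe controller, traddr 0000.00.10.0 through 0000.00.13.0) all go through the same fio_plugin helper traced at autotest_common.sh@1341-1356: when the SPDK ioengine plugin is built with AddressSanitizer, the sanitizer runtime has to be preloaded ahead of the plugin or fio aborts while dlopen()ing it. A minimal sketch of that idiom, reconstructed from the trace rather than copied from the source (the helper name fio_spdk is illustrative):

fio_spdk() {
    local plugin=$1; shift
    local sanitizer asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # field 3 is the resolved runtime path when the plugin links it.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # LD_PRELOAD is a space-separated list; the sanitizer must come first.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}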
00:15:29.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:29.201 12:46:28 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:29.201 12:46:28 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:29.202 12:46:28 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:29.202 12:46:28 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@345 -- # : 1 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@368 -- # return 0 00:15:29.202 12:46:28 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.202 12:46:28 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.202 --rc genhtml_branch_coverage=1 00:15:29.202 --rc genhtml_function_coverage=1 00:15:29.202 --rc genhtml_legend=1 00:15:29.202 --rc geninfo_all_blocks=1 00:15:29.202 --rc geninfo_unexecuted_blocks=1 00:15:29.202 00:15:29.202 ' 00:15:29.202 12:46:28 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.202 --rc genhtml_branch_coverage=1 00:15:29.202 --rc genhtml_function_coverage=1 00:15:29.202 --rc genhtml_legend=1 00:15:29.202 --rc geninfo_all_blocks=1 00:15:29.202 --rc geninfo_unexecuted_blocks=1 00:15:29.202 00:15:29.202 ' 00:15:29.202 12:46:28 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.202 --rc genhtml_branch_coverage=1 00:15:29.202 --rc genhtml_function_coverage=1 00:15:29.202 --rc genhtml_legend=1 00:15:29.202 --rc geninfo_all_blocks=1 00:15:29.202 --rc geninfo_unexecuted_blocks=1 00:15:29.202 00:15:29.202 ' 00:15:29.202 12:46:28 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:29.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.202 --rc genhtml_branch_coverage=1 00:15:29.202 --rc genhtml_function_coverage=1 00:15:29.202 --rc genhtml_legend=1 00:15:29.202 --rc geninfo_all_blocks=1 00:15:29.202 --rc geninfo_unexecuted_blocks=1 00:15:29.202 00:15:29.202 ' 00:15:29.202 12:46:28 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.202 12:46:28 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.202 12:46:28 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.202 12:46:28 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.202 12:46:28 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.202 12:46:28 nvme_scc -- paths/export.sh@5 -- # export PATH 00:15:29.202 12:46:28 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
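The lcov probe at the top of this test (scripts/common.sh@333-368 in the trace above) hinges on a small dotted-version comparator: lt 1.15 2 succeeds, so the installed lcov is considered recent enough for the branch-coverage flags that follow. An approximate reconstruction, assuming purely numeric components and folding the trace's decimal-validation helper into a default expansion:

# cmp_versions "1.15" '<' "2"  ->  true, because 1 < 2 at the first component
cmp_versions() {
    local IFS=.-:                      # split components on dots/dashes/colons
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components act as 0
        (( d1 > d2 )) && { [[ $op == *'>'* ]]; return; }
        (( d1 < d2 )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]                 # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }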
00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:29.202 12:46:28 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:15:29.202 12:46:28 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.202 12:46:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:15:29.202 12:46:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:15:29.202 12:46:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:15:29.202 12:46:28 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:29.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:29.725 Waiting for block devices as requested 00:15:29.725 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.986 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.986 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.986 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:35.366 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:35.366 12:46:34 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:35.366 12:46:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:35.366 12:46:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:35.366 12:46:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:35.366 12:46:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
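From here to the end of the section the trace is one long nvme_get loop: every "key : value" line that nvme-cli emits for id-ctrl is folded into a global associative array, so later checks can test fields such as ${nvme0[oncs]} or ${nvme0[mdts]} without re-querying the device. A simplified reconstruction of the helper traced at nvme/functions.sh@17-23 (the real helper's whitespace handling and skip logic differ in detail):

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                       # e.g. declare -gA nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "vid      " -> "vid"
        val=${val#"${val%%[![:space:]]*}"}    # ltrim the value
        [[ -n $reg && -n $val ]] || continue  # the [[ -n '' ]] skips above
        eval "${ref}[\$reg]=\$val"            # nvme0[vid]=0x1b36, nvme0[mdts]=7, ...
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}
# Invoked as in the trace: nvme_get nvme0 id-ctrl /dev/nvme0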
00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:35.366 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:35.367 12:46:34 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:35.367 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.368 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:35.369 12:46:34 nvme_scc -- nvme/functions.sh@23 
00:15:35.369-00:15:35.373 12:46:34 nvme_scc -- nvme/functions.sh@23 -- # nvme_get nvme0 id-ctrl /dev/nvme0 (continued):
    nvme0[fna]=0 nvme0[vwc]=0x7 nvme0[awun]=0 nvme0[awupf]=0 nvme0[icsvscc]=0 nvme0[nwpc]=0 nvme0[acwu]=0
    nvme0[ocfs]=0x3 nvme0[sgls]=0x1 nvme0[mnan]=0 nvme0[maxdna]=0 nvme0[maxcna]=0
    nvme0[subnqn]=nqn.2019-08.org.qemu:12341 nvme0[ioccsz]=0 nvme0[iorcsz]=0 nvme0[icdoff]=0
    nvme0[fcatt]=0 nvme0[msdbd]=0 nvme0[ofcs]=0
    nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
    nvme0[active_power_workload]=-
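Every field above comes from the same three traced steps in nvme/functions.sh: split a `reg : val` line of nvme-cli output on IFS=:, test the value with [[ -n ]], then eval it into a controller-named associative array. A minimal standalone sketch of that pattern (the `ctrl` array name is hypothetical; assumes nvme-cli's usual `field : value` line format):

    #!/usr/bin/env bash
    # Cache `nvme id-ctrl` output into an associative array, one entry per field.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # strip padding around the field name
        [[ -n $reg && -n $val ]] || continue  # skip banner and empty lines
        ctrl[$reg]=${val# }                   # keep the value, minus one leading space
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vwc=${ctrl[vwc]} oncs=${ctrl[oncs]}"

Assigning directly instead of eval'ing a generated statement keeps the sketch safe against values containing shell metacharacters; the traced script reaches the same end state, an array keyed by register name.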
00:15:35.373 12:46:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:15:35.373 12:46:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:15:35.373 12:46:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:15:35.373 12:46:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:15:35.373-00:15:35.383 12:46:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1:
    ng0n1[nsze]=0x140000 ng0n1[ncap]=0x140000 ng0n1[nuse]=0x140000 ng0n1[nsfeat]=0x14 ng0n1[nlbaf]=7
    ng0n1[flbas]=0x4 ng0n1[mc]=0x3 ng0n1[dpc]=0x1f ng0n1[dps]=0 ng0n1[nmic]=0 ng0n1[rescap]=0 ng0n1[fpi]=0
    ng0n1[dlfeat]=1 ng0n1[nawun]=0 ng0n1[nawupf]=0 ng0n1[nacwu]=0 ng0n1[nabsn]=0 ng0n1[nabo]=0
    ng0n1[nabspf]=0 ng0n1[noiob]=0 ng0n1[nvmcap]=0 ng0n1[npwg]=0 ng0n1[npwa]=0 ng0n1[npdg]=0 ng0n1[npda]=0
    ng0n1[nows]=0 ng0n1[mssrl]=128 ng0n1[mcl]=128 ng0n1[msrc]=127 ng0n1[nulbaf]=0 ng0n1[anagrpid]=0
    ng0n1[nsattr]=0 ng0n1[nvmsetid]=0 ng0n1[endgid]=0
    ng0n1[nguid]=00000000000000000000000000000000 ng0n1[eui64]=0000000000000000
    ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '   ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
    ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '  ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
    ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '  ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '  ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:15:35.386 12:46:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
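flbas=0x4 selects entry 4 of the LBA-format table above: lbads:12 means 2^12 = 4096-byte data blocks with no metadata (ms:0), which is why lbaf4 is the one marked (in use). That can be recomputed from the parsed fields; a short sketch (hypothetical helper variables, reusing the ng0n1 array populated by the trace above):

    # The low four bits of FLBAS index the active LBA format.
    fmt=$(( ${ng0n1[flbas]} & 0xf ))                                   # 0x4 & 0xf = 4
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ng0n1[lbaf$fmt]}")
    echo "active format $fmt, block size $(( 1 << lbads )) bytes"      # 1 << 12 = 4096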
00:15:35.386 12:46:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:15:35.386 12:46:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:15:35.386-00:15:35.393 12:46:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1: every field matches the ng0n1 dump above (nsze=ncap=nuse=0x140000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, mssrl=128, mcl=128, msrc=127, zero nguid/eui64, lbaf0-lbaf7 identical with lbaf4='ms:0 lbads:12 rp:0 (in use)')
00:15:35.393 12:46:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:15:35.393 12:46:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:15:35.393 12:46:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:15:35.394 12:46:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:15:35.394 12:46:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:15:35.394 12:46:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:15:35.394 12:46:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:15:35.394 12:46:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:15:35.394 12:46:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 (scripts/common.sh@18-27) -- # return 0
00:15:35.394 12:46:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
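The controller loop above walks /sys/class/nvme/nvme*, resolves each controller's PCI address, and filters it through pci_can_use before parsing it. A minimal sketch of the same discovery walk (hypothetical standalone version; it reads the address from the sysfs device link rather than going through functions.sh):

    # List NVMe controllers with their PCI bus:device.function addresses.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                      # glob may match nothing
        bdf=$(basename "$(readlink -f "$ctrl/device")")
        echo "${ctrl##*/} -> $bdf"                      # e.g. nvme1 -> 0000:00:10.0
    done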
00:15:35.394-00:15:35.395 12:46:34-12:46:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1:
    nvme1[vid]=0x1b36 nvme1[ssvid]=0x1af4 nvme1[sn]='12340 ' nvme1[mn]='QEMU NVMe Ctrl ' nvme1[fr]='8.0.0 '
    nvme1[rab]=6 nvme1[ieee]=525400 nvme1[cmic]=0 nvme1[mdts]=7 nvme1[cntlid]=0 nvme1[ver]=0x10400
    nvme1[rtd3r]=0 nvme1[rtd3e]=0 nvme1[oaes]=0x100 nvme1[ctratt]=0x8000 nvme1[rrls]=0 nvme1[cntrltype]=1
    nvme1[fguid]=00000000-0000-0000-0000-000000000000 nvme1[crdt1]=0 nvme1[crdt2]=0 nvme1[crdt3]=0
    nvme1[nvmsr]=0 nvme1[vwci]=0 nvme1[mec]=0 nvme1[oacs]=0x12a nvme1[acl]=3 nvme1[aerl]=3 nvme1[frmw]=0x3
    nvme1[lpa]=0x7 nvme1[elpe]=0 nvme1[npss]=0 nvme1[avscc]=0 nvme1[apsta]=0 nvme1[wctemp]=343 nvme1[cctemp]=373
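wctemp=343 and cctemp=373 are thermal thresholds reported in Kelvin, so this QEMU controller warns at 70 °C and reports a critical condition at 100 °C. A one-line check (hypothetical, reusing the nvme1 array populated above):

    # 343 K - 273 = 70 C warning; 373 K - 273 = 100 C critical.
    echo "warn at $(( ${nvme1[wctemp]} - 273 )) C, critical at $(( ${nvme1[cctemp]} - 273 )) C"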
    nvme1[mtfa]=0 nvme1[hmpre]=0 nvme1[hmmin]=0 nvme1[tnvmcap]=0 nvme1[unvmcap]=0 nvme1[rpmbs]=0
    nvme1[edstt]=0 nvme1[dsto]=0 nvme1[fwug]=0 nvme1[kas]=0 nvme1[hctma]=0 nvme1[mntmt]=0 nvme1[mxtmt]=0
    nvme1[sanicap]=0 nvme1[hmminds]=0 nvme1[hmmaxd]=0 nvme1[nsetidmax]=0 nvme1[endgidmax]=0 nvme1[anatt]=0
    nvme1[anacap]=0 nvme1[anagrpmax]=0 nvme1[nanagrpid]=0 nvme1[pels]=0 nvme1[domainid]=0 nvme1[megcap]=0
    nvme1[sqes]=0x66 nvme1[cqes]=0x44 nvme1[maxcmd]=0 nvme1[nn]=256
00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:35.395 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.396 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:35.396 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:35.400 12:46:35 
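What the trace above is showing is the script's nvme_get helper: it runs nvme-cli, splits every line of output on ':', and stores each field in a global bash associative array named after the device. A minimal sketch of that loop, reconstructed from the traced statements at nvme/functions.sh@16-23; the whitespace trimming is an assumption and the real helper may differ in detail:

  shopt -s extglob    # for the +([[:space:]]) trim pattern below

  nvme_get() {
      local ref=$1 reg val
      shift                                  # remaining args, e.g.: id-ctrl /dev/nvme1
      local -gA "$ref=()"                    # @20: declare the global assoc array
      while IFS=: read -r reg val; do        # @21: split "sqes : 0x66" on the colon
          reg=${reg//[[:space:]]/}           # assumed: strip padding from the key
          val=${val##+([[:space:]])}         # assumed: drop leading blanks only
          [[ -n $val ]] || continue          # @22: skip lines that carry no value
          eval "${ref}[${reg}]=\"${val}\""   # @23: e.g. nvme1[sqes]="0x66"
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: nvme-cli runs the query
  }

Called as nvme_get nvme1 id-ctrl /dev/nvme1, after which ${nvme1[sqes]} expands to 0x66; the inner double quotes in the eval are what let multi-word values such as ps0 survive as a single array element.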
00:15:35.400 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:35.401 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:15:35.401 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:15:35.401 12:46:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:15:35.401 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:15:35.401 12:46:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
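The id-ns dump that starts next is not a second namespace: the loop at nvme/functions.sh@54 uses an extglob alternation that matches both the generic character node and the block node of namespace 1 under the controller's sysfs directory. A sketch of how that pattern expands for this controller, assuming extglob is enabled as the traced glob requires:

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme1
  # "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the pattern is
  # effectively @(ng1|nvme1n)* and matches ng1n1 as well as nvme1n1.
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue    # @55: nothing matched, skip the literal pattern
      ns_dev=${ns##*/}            # @56: ng1n1 on the first pass, nvme1n1 on the second
      echo "nvme_get $ns_dev id-ns /dev/$ns_dev"
  done

Both passes query the same underlying namespace, which is why the ng1n1 and nvme1n1 dumps carry identical values.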
00:15:35.402 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
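With the namespace arrays populated, later test code can read the geometry straight back out of them. A worked example using the values captured above (the decoding is illustrative, not a functions.sh helper): flbas=0x7 selects LBA format 7, whose lbads:12 means 2^12-byte logical blocks.

  # Values copied from the nvme1n1 dump above.
  declare -A nvme1n1=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')

  fmt=$((nvme1n1[flbas] & 0xf))               # low nibble = in-use format index -> 7
  lbaf=${nvme1n1[lbaf$fmt]}                   # 'ms:64 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
  echo "logical block size: $((1 << lbads)) bytes"   # -> 4096

The same lookup against nsze (0x17a17a = 1548666 blocks) puts the namespace at roughly 6.3 GB.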
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 (scripts/common.sh@27 returns 0: allow/block lists are empty)
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
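Before the nvme2 dump scrolls past, the registration just traced (nvme/functions.sh@60-63) is the step the rest of the suite leans on: every scanned controller lands in three associative maps plus an indexed array ordered by controller number, and pci_can_use gates the next device against the PCI allow/block lists (both empty in this run, hence the bare left-hand side in the [[ =~ 0000:00:12.0 ]] test). A sketch of that bookkeeping with the nvme1 values from the trace; only the array names shown in the log are known, the rest is assumption:

  declare -A ctrls=() nvmes=() bdfs=()
  declare -a ordered_ctrls=()

  ctrl_dev=nvme1
  ctrls["$ctrl_dev"]=nvme1                  # @60: device -> name of its id-ctrl array
  nvmes["$ctrl_dev"]=nvme1_ns               # @61: device -> name of its namespace map
  bdfs["$ctrl_dev"]=0000:00:10.0            # @62: device -> PCI address from sysfs
  ordered_ctrls[${ctrl_dev/nvme/}]=nvme1    # @63: numeric slot 1 -> nvme1

  echo "nvme1 sits at ${bdfs[nvme1]}"       # -> 0000:00:10.0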
'nvme2[fr]="8.0.0 "' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:35.403 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.403 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:35.404 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:35.404 
12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:35.404 
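The block above is nvme/functions.sh's nvme_get helper at work: it splits each "reg : val" line of nvme-cli output on ':' and eval-assigns the pair into a global associative array, and the namespace loop that follows repeats the same pattern per namespace. A minimal sketch of that parse, assuming nvme-cli is on the PATH; the function name parse_id_ctrl is illustrative, not part of the repo:

  #!/usr/bin/env bash
  # Sketch of the id-ctrl parse traced above: one IFS=':' read per output
  # line, then an eval that lands the value in an array named by $1.
  parse_id_ctrl() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"                    # e.g. declare -gA nvme2=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}               # field names are space-padded
      [[ -n $reg && -n $val ]] || continue   # skip blank/banner lines
      eval "${ref}[${reg}]=\"\${val# }\""    # e.g. nvme2[mdts]=7
    done < <(nvme id-ctrl "$dev")
  }
  # usage: parse_id_ctrl nvme2 /dev/nvme2 && echo "${nvme2[subnqn]}"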
00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:15:35.404 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1: parsed into the ng2n1[] array:
00:15:35.404 12:46:35 nvme_scc -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:15:35.404 12:46:35 nvme_scc -- #   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:15:35.405 12:46:35 nvme_scc -- #   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:15:35.405 12:46:35 nvme_scc -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:35.405 12:46:35 nvme_scc -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:15:35.405 12:46:35 nvme_scc -- #   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:15:35.405 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1
00:15:35.405 12:46:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* -> next match: ng2n2
00:15:35.405 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
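Each lbafN string parsed above packs three fields: ms (metadata bytes per LBA), lbads (LBA data size as a power of two, so lbads:9 means 512-byte and lbads:12 means 4096-byte blocks), and rp (relative performance); "(in use)" marks the format selected by flbas (0x4 here, i.e. lbaf4). A small illustrative decoder, not taken from the repo:

  #!/usr/bin/env bash
  # Decode an lbafN value like the ones stored above. lbads is an exponent,
  # so the LBA size in bytes is 2^lbads.
  decode_lbaf() {
    local lbaf=$1 ms lbads rp
    ms=${lbaf#*ms:};       ms=${ms%% *}
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
    rp=${lbaf#*rp:};       rp=${rp%% *}
    printf 'metadata=%sB block=%dB rp=%s\n' "$ms" "$((1 << lbads))" "$rp"
  }
  decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> metadata=0B block=4096B rp=0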
00:15:35.405 12:46:35 nvme_scc -- #   (ng2n2 continued) mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
00:15:35.405 12:46:35 nvme_scc -- #   nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:15:35.406 12:46:35 nvme_scc -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:15:35.406 12:46:35 nvme_scc -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:15:35.406 12:46:35 nvme_scc -- #   lbaf0..lbaf7 identical to ng2n1, with lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:15:35.406 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:15:35.406 12:46:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:15:35.406 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:35.671 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.672 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:35.672 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:35.673 12:46:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:35.673 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:35.674 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:35.674 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:35.675 
12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:35.675 12:46:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:35.675 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:35.676 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:35.676 12:46:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:35.676 12:46:35 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:35.677 12:46:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:15:35.677 12:46:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:35.677 12:46:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:35.677 12:46:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 
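Earlier in this chunk, pci_can_use cleared 0000:00:13.0 because both device filters were empty: the [[ =~ ]] ran against an empty allow-list and the [[ -z '' ]] check saw an empty block-list, so the function returned 0 and nvme3 enumeration began. A hedged sketch of that gating policy, with PCI_ALLOWED/PCI_BLOCKED used as assumed names for whatever lists scripts/common.sh actually consults:

    pci_can_use() {
        local i
        # if an allow-list is set, the device must match it
        if [[ -n ${PCI_ALLOWED:-} ]]; then
            [[ $PCI_ALLOWED =~ $1 ]] || return 1
        fi
        # an empty block-list rejects nothing
        [[ -z ${PCI_BLOCKED:-} ]] && return 0
        for i in $PCI_BLOCKED; do
            [[ $i == "$1" ]] && return 1
        done
        return 0
    }

    pci_can_use 0000:00:13.0 && echo usable   # both lists empty -> usable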
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:35.677 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:35.677 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.677 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 
12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:35.678 12:46:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 
12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:35.678 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:35.679 
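Two of the values captured here, sqes=0x66 and cqes=0x44, are packed powers of two: the low nibble is the required queue-entry size and the high nibble the maximum, per the Identify Controller data structure. A quick standalone decode:

    sqes=0x66 cqes=0x44
    printf 'SQ entries: min %d B, max %d B\n' $(( 2 ** (sqes & 0xf) )) $(( 2 ** (sqes >> 4) ))
    printf 'CQ entries: min %d B, max %d B\n' $(( 2 ** (cqes & 0xf) )) $(( 2 ** (cqes >> 4) ))
    # prints 64-byte submission entries and 16-byte completion entries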
12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:35.679 12:46:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:35.679 12:46:35 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:35.680 12:46:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:15:35.680 12:46:35 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:15:35.680 12:46:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:15:35.680 12:46:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:15:35.680 12:46:35 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:36.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:36.824 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.824 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.824 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.824 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.824 12:46:36 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:36.824 12:46:36 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:36.824 12:46:36 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.824 12:46:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:36.824 ************************************ 00:15:36.824 START TEST nvme_simple_copy 00:15:36.824 ************************************ 00:15:36.824 12:46:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:15:37.139 Initializing NVMe Controllers 00:15:37.139 Attaching to 0000:00:10.0 00:15:37.139 Controller supports SCC. Attached to 0000:00:10.0 00:15:37.139 Namespace ID: 1 size: 6GB 00:15:37.139 Initialization complete. 
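Before the copy results below: the controller selection traced above keys off ONCS bit 8, the Copy-command capability. Every controller in this run reports oncs=0x15d, which has that bit set, and the first name returned (nvme1, bound at 0000:00:10.0) becomes the test target. A sketch of that predicate, self-contained but using the same nameref trick the trace shows:

    ctrl_has_scc() {
        local -n _ctrl=$1            # nameref into the nvme1/nvme2/... array
        local oncs=${_ctrl[oncs]:-0}
        (( oncs & 1 << 8 ))          # 0x15d & 0x100 != 0 -> supported
    }

    declare -A nvme1=([oncs]=0x15d)  # value as captured in the trace above
    ctrl_has_scc nvme1 && echo "nvme1 can run Simple Copy"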
00:15:37.139 00:15:37.139 Controller QEMU NVMe Ctrl (12340 ) 00:15:37.139 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:15:37.139 Namespace Block Size:4096 00:15:37.139 Writing LBAs 0 to 63 with Random Data 00:15:37.139 Copied LBAs from 0 - 63 to the Destination LBA 256 00:15:37.139 LBAs matching Written Data: 64 00:15:37.139 00:15:37.139 real 0m0.271s 00:15:37.139 user 0m0.103s 00:15:37.139 sys 0m0.065s 00:15:37.139 12:46:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.139 ************************************ 00:15:37.139 END TEST nvme_simple_copy 00:15:37.139 ************************************ 00:15:37.139 12:46:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:15:37.139 00:15:37.139 real 0m8.114s 00:15:37.139 user 0m1.237s 00:15:37.139 sys 0m1.498s 00:15:37.139 ************************************ 00:15:37.139 END TEST nvme_scc 00:15:37.139 ************************************ 00:15:37.139 12:46:36 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.139 12:46:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:15:37.405 12:46:36 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:15:37.405 12:46:36 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:15:37.405 12:46:36 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:15:37.405 12:46:36 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:15:37.405 12:46:36 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:15:37.405 12:46:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.405 12:46:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.405 12:46:36 -- common/autotest_common.sh@10 -- # set +x 00:15:37.405 ************************************ 00:15:37.405 START TEST nvme_fdp 00:15:37.405 ************************************ 00:15:37.405 12:46:36 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:15:37.405 * Looking for test storage... 00:15:37.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:37.405 12:46:37 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:37.405 12:46:37 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:37.405 12:46:37 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:15:37.405 12:46:37 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:15:37.406 12:46:37 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.406 12:46:37 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.406 --rc genhtml_branch_coverage=1 00:15:37.406 --rc genhtml_function_coverage=1 00:15:37.406 --rc genhtml_legend=1 00:15:37.406 --rc geninfo_all_blocks=1 00:15:37.406 --rc geninfo_unexecuted_blocks=1 00:15:37.406 00:15:37.406 ' 00:15:37.406 12:46:37 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.406 --rc genhtml_branch_coverage=1 00:15:37.406 --rc genhtml_function_coverage=1 00:15:37.406 --rc genhtml_legend=1 00:15:37.406 --rc geninfo_all_blocks=1 00:15:37.406 --rc geninfo_unexecuted_blocks=1 00:15:37.406 00:15:37.406 ' 00:15:37.406 12:46:37 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.406 --rc genhtml_branch_coverage=1 00:15:37.406 --rc genhtml_function_coverage=1 00:15:37.406 --rc genhtml_legend=1 00:15:37.406 --rc geninfo_all_blocks=1 00:15:37.406 --rc geninfo_unexecuted_blocks=1 00:15:37.406 00:15:37.406 ' 00:15:37.406 12:46:37 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:37.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.406 --rc genhtml_branch_coverage=1 00:15:37.406 --rc genhtml_function_coverage=1 00:15:37.406 --rc genhtml_legend=1 00:15:37.406 --rc geninfo_all_blocks=1 00:15:37.406 --rc geninfo_unexecuted_blocks=1 00:15:37.406 00:15:37.406 ' 00:15:37.406 12:46:37 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.406 12:46:37 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.406 12:46:37 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.406 12:46:37 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.406 12:46:37 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.406 12:46:37 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:15:37.406 12:46:37 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:15:37.406 12:46:37 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:15:37.406 12:46:37 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.406 12:46:37 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:37.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:37.926 Waiting for block devices as requested 00:15:37.926 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:37.926 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:38.185 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:38.185 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:43.486 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:43.486 12:46:42 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:15:43.486 12:46:42 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:15:43.486 12:46:42 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:43.486 12:46:42 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:43.486 12:46:42 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:43.486 12:46:42 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:43.486 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:15:43.487 12:46:42 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:15:43.487 12:46:42 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:43.487 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:43.487 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:15:43.488 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.488 
12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.488 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:15:43.489 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:15:43.489 12:46:43 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:15:43.489 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.489 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
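Every eval in this long stretch is the same nvme_get idiom: the output of `nvme id-ctrl` (and, for namespaces, `nvme id-ns`) is split with `IFS=: read -r reg val` and each pair is eval'ed into a global associative array — nvme0, ng0n1, and so on — so later feature checks become plain array lookups. A condensed sketch of the pattern, assuming nvme-cli's "field : value" text format (simplified from functions.sh; padding and edge cases are handled less carefully than in the real helper):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}             # "mdts      " -> "mdts"
        val=${val# }                         # drop the space after the colon
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "oncs=${ctrl[oncs]} mdts=${ctrl[mdts]}"   # e.g. oncs=0x15d mdts=7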
00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:15:43.490 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.490 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
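That completes ng0n1's identify-namespace data, and the LBA-format fields explain the geometry seen throughout the run: nlbaf=7 means eight formats lbaf0-lbaf7 are listed, flbas=0x4 (bits 3:0) selects entry 4, and lbaf4 — "ms:0 lbads:12 rp:0 (in use)" — gives 2^12 = 4096-byte data blocks with no metadata, so nsze=0x140000 blocks works out to a 5 GiB namespace. A tiny decoding sketch using values copied from the trace (the variable names are illustrative only):

    flbas=0x4                                    # bits 3:0 index the in-use LBA format
    lbaf4='ms:0 lbads:12 rp:0 (in use)'          # format entry 4, verbatim from above
    lbads=${lbaf4#*lbads:}; lbads=${lbads%% *}   # -> 12

    printf 'block size: %d bytes\n' $(( 1 << lbads ))                # 4096
    printf 'capacity:   %d GiB\n'   $(( (0x140000 << lbads) >> 30 )) # 5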
00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:15:43.491 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.491 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.492 12:46:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:15:43.492 12:46:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:15:43.492 12:46:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:43.492 12:46:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:43.492 12:46:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:43.492 12:46:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:15:43.493 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:15:43.493 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
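At this point the trace has moved from nvme0's namespaces to the second controller: `pci_can_use 0000:00:10.0` admitted the device and `nvme_get nvme1 id-ctrl /dev/nvme1` is now populating `nvme1` (vid 0x1b36 / ssvid 0x1af4, i.e. QEMU's emulated controller, fr "8.0.0", mdts=7). MDTS is a power of two in units of the controller's minimum memory page size, so the captured value converts as in this sketch, which assumes MPSMIN=0 (4 KiB pages) since CAP is not shown in this trace:

```bash
# Convert the MDTS captured above (nvme1[mdts]=7) into bytes. Per the NVMe
# spec, MDTS is 2^n units of the minimum page size 2^(12 + CAP.MPSMIN), and
# 0 means "no limit". MPSMIN=0 (4 KiB) is an assumption, not read from CAP.
mdts_bytes() {
    local mdts=$1 mpsmin=${2:-0}
    (( mdts == 0 )) && { echo unlimited; return; }
    echo $(( (1 << mdts) * (1 << (12 + mpsmin)) ))
}
mdts_bytes 7   # => 524288, i.e. a 512 KiB maximum transfer size
```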
00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
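A few entries back the loop stored this controller's thermal thresholds (`nvme1[wctemp]=343`, `nvme1[cctemp]=373`). Identify Controller reports these as integer kelvin, so the QEMU defaults decode as below (a one-liner using the integer 273 K offset, as nvme-cli does):

```bash
# WCTEMP/CCTEMP are reported in kelvin; convert the values traced above.
k_to_c() { echo "$(( $1 - 273 )) C"; }
k_to_c 343   # wctemp => 70 C  (warning threshold)
k_to_c 373   # cctemp => 100 C (critical threshold)
```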
00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:43.494 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:43.495 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:15:43.496 12:46:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
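The second namespace scan (`ng1n1`) reports `nsze=0x17a17a` with `flbas=0x7`; the low four FLBAS bits select the in-use LBA format. Assuming ng1n1 carries the same LBA-format table the trace printed for nvme0 (lbaf7 => `ms:64 lbads:12`, i.e. 4096-byte data blocks) — its own lbaf entries appear further down the log — the size works out as in this sketch:

```bash
# Size math for ng1n1 from the values traced above. lbads=12 is assumed from
# nvme0's lbaf table; bits 0-3 of FLBAS pick the in-use format.
nsze=0x17a17a flbas=0x7
lbads=12
fmt=$(( flbas & 0xf ))                  # => 7
bytes=$(( nsze * (1 << lbads) ))        # blocks * 4096
echo "lbaf$fmt in use: $bytes bytes"    # => lbaf7 in use: 6343335936 (~5.9 GiB)
```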
00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:15:43.496 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:15:43.496 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.497 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.497 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.497 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:15:43.498 12:46:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:15:43.498 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:15:43.498 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:15:43.499 12:46:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:43.499 12:46:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:43.499 12:46:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:43.499 12:46:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.499 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:15:43.500 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:15:43.500 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.500 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.501 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 
12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:15:43.502 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.503 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:15:43.504 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 
12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:15:43.504 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:15:43.505 
12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:15:43.505 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:15:43.506 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.506 12:46:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.506 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:15:43.507 12:46:43 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.507 
12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:15:43.507 12:46:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.507 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.508 
12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
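
[editor's note] Every lbafN string recorded above has the form 'ms:<metadata bytes> lbads:<log2 of data size> rp:<relative performance>', and the '(in use)' tag on lbaf4 agrees with the flbas=0x4 captured earlier for this namespace. A quick derivation of the logical block size from the in-use format, as a sketch against the array populated above:

  lbaf=${nvme2n1[lbaf4]}                     # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # -> 12
  echo "logical block size: $((1 << lbads)) bytes"  # 2^12 = 4096
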
00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:15:43.508 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:15:43.509 12:46:43 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:15:43.509 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:15:43.510 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:15:43.510 12:46:43 
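
[editor's note] The functions.sh@54 loop driving all of this pairs each controller's generic char-device nodes (ng2n*) with its block namespaces (nvme2n*) through one extglob pattern. A standalone sketch of that enumeration (a reconstruction, not the verbatim source; requires extglob):

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme2
  # "ng${ctrl##*nvme}" -> ng2, "${ctrl##*/}n" -> nvme2n, so the pattern
  # matches both ng2n* and nvme2n* entries under the controller.
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}    # e.g. ng2n3 or nvme2n2
    # ${ns##*n} yields the namespace index used to key _ctrl_ns above.
    echo "found namespace node: $ns_dev (index ${ns##*n})"
  done
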
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:15:43.510 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:15:43.511 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:15:43.511 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:15:43.512 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:15:43.512 12:46:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:15:43.512 12:46:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:43.512 12:46:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:43.512 12:46:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- 
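
[editor's note] The empty left-hand operand in the '[[ =~ 0000:00:13.0 ]]' entry above is an empty block list being matched against nvme3's BDF, after which '[[ -z '' ]]' lets the device through. A simplified filter with the same control flow (the PCI_BLOCKED/PCI_ALLOWED names follow SPDK convention and are assumed here, not read off this log):

  pci_can_use() {
    local bdf=$1
    # Block list wins: refuse the device if its BDF appears there.
    [[ ${PCI_BLOCKED:-} =~ $bdf ]] && return 1
    # An empty allow list means every remaining device is permitted.
    [[ -z ${PCI_ALLOWED:-} ]] && return 0
    [[ $PCI_ALLOWED =~ $bdf ]]
  }
  # pci_can_use 0000:00:13.0 && ctrl_dev=nvme3
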
nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
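
[editor's note] The mdts=7 recorded just above caps any single transfer at 2^7 minimum-sized pages. Assuming the usual CAP.MPSMIN of 0 (4 KiB pages, an assumption this log does not confirm), that works out as:

  mdts=${nvme3[mdts]}   # 7, from the array populated above
  mpsmin_bytes=4096     # assumption: CAP.MPSMIN = 0 -> 4 KiB pages
  echo "max single transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"  # 512
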
00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.512 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 
12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:15:43.513 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.514 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.515 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
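The loop traced above is nvme/functions.sh walking `nvme id-ctrl`-style output line by line: IFS=: splits each line into a register name and a value, and an eval writes the pair into the nvme3 array. Below is a minimal standalone sketch of the same parse, assuming nvme-cli's "name : value" layout; the array name ctrl_regs and the device path /dev/nvme0 are illustrative, and a plain associative array stands in for the eval.

    #!/usr/bin/env bash
    # Split each "name : value" identify-controller line on ':' and keep it.
    declare -A ctrl_regs=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # keys like vid, mdts, ctratt
        val="${val#"${val%%[![:space:]]*}"}"   # trim leading spaces; keep trailing ones, as the trace does for sn/mn/fr
        [[ -n $reg && -n $val ]] && ctrl_regs[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "mdts=${ctrl_regs[mdts]} ctratt=${ctrl_regs[ctratt]}"

Assigning directly into an associative array avoids the quoting pitfalls that eval on identify strings would otherwise carry.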
00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:15:43.777 12:46:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:15:43.777 12:46:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:15:43.777 12:46:43 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:15:43.777 12:46:43 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:44.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:44.610 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.610 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.610 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.610 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.924 12:46:44 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:44.924 12:46:44 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:44.924 12:46:44 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.924 12:46:44 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:44.924 ************************************ 00:15:44.924 START TEST nvme_flexible_data_placement 00:15:44.924 ************************************ 00:15:44.924 12:46:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:15:44.924 Initializing NVMe Controllers 00:15:44.924 Attaching to 0000:00:13.0 00:15:44.924 Controller supports FDP Attached to 0000:00:13.0 00:15:44.924 Namespace ID: 1 Endurance Group ID: 1 00:15:44.924 Initialization complete. 
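Controller selection in the trace above comes down to one register: nvme0, nvme1, and nvme2 all report ctratt=0x8000, while nvme3 reports 0x88010, and ctrl_has_fdp keeps only controllers with bit 19 (0x80000, Flexible Data Placement) set. A small sketch of that gate, with the check rewritten as a hypothetical standalone function that takes the CTRATT value directly:

    # Succeed (return 0) when a CTRATT value advertises FDP support (bit 19).
    ctratt_has_fdp() {
        local ctratt=$(( $1 ))      # accepts 0x-prefixed hex as logged
        (( ctratt & (1 << 19) ))
    }

    ctratt_has_fdp 0x8000  || echo "0x8000: no FDP"        # nvme0/1/2 above
    ctratt_has_fdp 0x88010 && echo "0x88010: FDP present"  # nvme3 above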
00:15:44.924 00:15:44.924 ================================== 00:15:44.924 == FDP tests for Namespace: #01 == 00:15:44.924 ================================== 00:15:44.924 00:15:44.924 Get Feature: FDP: 00:15:44.924 ================= 00:15:44.924 Enabled: Yes 00:15:44.924 FDP configuration Index: 0 00:15:44.924 00:15:44.924 FDP configurations log page 00:15:44.924 =========================== 00:15:44.924 Number of FDP configurations: 1 00:15:44.924 Version: 0 00:15:44.924 Size: 112 00:15:44.924 FDP Configuration Descriptor: 0 00:15:44.924 Descriptor Size: 96 00:15:44.924 Reclaim Group Identifier format: 2 00:15:44.924 FDP Volatile Write Cache: Not Present 00:15:44.924 FDP Configuration: Valid 00:15:44.924 Vendor Specific Size: 0 00:15:44.924 Number of Reclaim Groups: 2 00:15:44.924 Number of Reclaim Unit Handles: 8 00:15:44.924 Max Placement Identifiers: 128 00:15:44.924 Number of Namespaces Supported: 256 00:15:44.924 Reclaim Unit Nominal Size: 6000000 bytes 00:15:44.924 Estimated Reclaim Unit Time Limit: Not Reported 00:15:44.924 RUH Desc #000: RUH Type: Initially Isolated 00:15:44.924 RUH Desc #001: RUH Type: Initially Isolated 00:15:44.924 RUH Desc #002: RUH Type: Initially Isolated 00:15:44.924 RUH Desc #003: RUH Type: Initially Isolated 00:15:44.924 RUH Desc #004: RUH Type: Initially Isolated 00:15:44.924 RUH Desc #005: RUH Type: Initially Isolated 00:15:44.924 RUH Desc #006: RUH Type: Initially Isolated 00:15:44.924 RUH Desc #007: RUH Type: Initially Isolated 00:15:44.924 00:15:44.924 FDP reclaim unit handle usage log page 00:15:44.924 ====================================== 00:15:44.924 Number of Reclaim Unit Handles: 8 00:15:44.924 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:15:44.924 RUH Usage Desc #001: RUH Attributes: Unused 00:15:44.924 RUH Usage Desc #002: RUH Attributes: Unused 00:15:44.924 RUH Usage Desc #003: RUH Attributes: Unused 00:15:44.924 RUH Usage Desc #004: RUH Attributes: Unused 00:15:44.924 RUH Usage Desc #005: RUH Attributes: Unused 00:15:44.924 RUH Usage Desc #006: RUH Attributes: Unused 00:15:44.924 RUH Usage Desc #007: RUH Attributes: Unused 00:15:44.924 00:15:44.924 FDP statistics log page 00:15:44.924 ======================= 00:15:44.924 Host bytes with metadata written: 1890959360 00:15:44.924 Media bytes with metadata written: 1891270656 00:15:44.924 Media bytes erased: 0 00:15:44.924 00:15:44.924 FDP Reclaim unit handle status 00:15:44.924 ============================== 00:15:44.924 Number of RUHS descriptors: 2 00:15:44.924 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000014a4 00:15:44.924 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:15:44.924 00:15:44.924 FDP write on placement id: 0 success 00:15:44.924 00:15:44.924 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:15:44.924 00:15:44.924 IO mgmt send: RUH update for Placement ID: #0 Success 00:15:44.924 00:15:44.924 Get Feature: FDP Events for Placement handle: #0 00:15:44.924 ======================== 00:15:44.924 Number of FDP Events: 6 00:15:44.924 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:15:44.924 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:15:44.924 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:15:44.924 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:15:44.924 FDP Event: #4 Type: Media Reallocated Enabled: No 00:15:44.924 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:15:44.924 00:15:44.924 FDP events log
page 00:15:44.924 =================== 00:15:44.924 Number of FDP events: 1 00:15:44.924 FDP Event #0: 00:15:44.924 Event Type: RU Not Written to Capacity 00:15:44.924 Placement Identifier: Valid 00:15:44.924 NSID: Valid 00:15:44.924 Location: Valid 00:15:44.924 Placement Identifier: 0 00:15:44.924 Event Timestamp: 4 00:15:44.924 Namespace Identifier: 1 00:15:44.924 Reclaim Group Identifier: 0 00:15:44.924 Reclaim Unit Handle Identifier: 0 00:15:44.924 00:15:44.924 FDP test passed 00:15:45.185 00:15:45.185 real 0m0.237s 00:15:45.185 user 0m0.073s 00:15:45.185 sys 0m0.063s 00:15:45.185 ************************************ 00:15:45.185 12:46:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.185 12:46:44 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:15:45.185 END TEST nvme_flexible_data_placement 00:15:45.185 ************************************ 00:15:45.185 00:15:45.185 real 0m7.847s 00:15:45.185 user 0m1.099s 00:15:45.185 sys 0m1.479s 00:15:45.185 12:46:44 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.185 ************************************ 00:15:45.185 12:46:44 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:15:45.185 END TEST nvme_fdp 00:15:45.185 ************************************ 00:15:45.185 12:46:44 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:15:45.185 12:46:44 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:45.185 12:46:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:45.185 12:46:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.185 12:46:44 -- common/autotest_common.sh@10 -- # set +x 00:15:45.185 ************************************ 00:15:45.185 START TEST nvme_rpc 00:15:45.185 ************************************ 00:15:45.185 12:46:44 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:15:45.185 * Looking for test storage... 
00:15:45.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:45.185 12:46:44 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:45.185 12:46:44 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:45.185 12:46:44 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:45.185 12:46:45 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:45.185 12:46:45 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.185 12:46:45 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.185 12:46:45 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:15:45.467 12:46:45 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.468 12:46:45 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:15:45.468 12:46:45 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.468 12:46:45 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:45.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.468 --rc genhtml_branch_coverage=1 00:15:45.468 --rc genhtml_function_coverage=1 00:15:45.468 --rc genhtml_legend=1 00:15:45.468 --rc geninfo_all_blocks=1 00:15:45.468 --rc geninfo_unexecuted_blocks=1 00:15:45.468 00:15:45.468 ' 00:15:45.468 12:46:45 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:45.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.468 --rc genhtml_branch_coverage=1 00:15:45.468 --rc genhtml_function_coverage=1 00:15:45.468 --rc genhtml_legend=1 00:15:45.468 --rc geninfo_all_blocks=1 00:15:45.468 --rc geninfo_unexecuted_blocks=1 00:15:45.468 00:15:45.468 ' 00:15:45.468 12:46:45 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:45.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.468 --rc genhtml_branch_coverage=1 00:15:45.469 --rc genhtml_function_coverage=1 00:15:45.469 --rc genhtml_legend=1 00:15:45.469 --rc geninfo_all_blocks=1 00:15:45.469 --rc geninfo_unexecuted_blocks=1 00:15:45.469 00:15:45.469 ' 00:15:45.469 12:46:45 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:45.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.469 --rc genhtml_branch_coverage=1 00:15:45.469 --rc genhtml_function_coverage=1 00:15:45.469 --rc genhtml_legend=1 00:15:45.469 --rc geninfo_all_blocks=1 00:15:45.469 --rc geninfo_unexecuted_blocks=1 00:15:45.469 00:15:45.469 ' 00:15:45.469 12:46:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.469 12:46:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:45.469 12:46:45 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:45.469 12:46:45 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:15:45.470 12:46:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:15:45.470 12:46:45 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77772 00:15:45.470 12:46:45 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:45.470 12:46:45 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77772 00:15:45.470 12:46:45 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77772 ']' 00:15:45.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.470 12:46:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.471 [2024-12-05 12:46:45.186758] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:15:45.471 [2024-12-05 12:46:45.186904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77772 ] 00:15:45.733 [2024-12-05 12:46:45.346726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:45.733 [2024-12-05 12:46:45.372730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.733 [2024-12-05 12:46:45.372778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.304 12:46:46 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.304 12:46:46 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:46.304 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:15:46.565 Nvme0n1 00:15:46.565 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:46.565 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:46.825 request: 00:15:46.825 { 00:15:46.825 "bdev_name": "Nvme0n1", 00:15:46.825 "filename": "non_existing_file", 00:15:46.825 "method": "bdev_nvme_apply_firmware", 00:15:46.825 "req_id": 1 00:15:46.825 } 00:15:46.825 Got JSON-RPC error response 00:15:46.825 response: 00:15:46.825 { 00:15:46.825 "code": -32603, 00:15:46.825 "message": "open file failed." 00:15:46.825 } 00:15:46.825 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:46.825 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:46.825 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:47.086 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:47.086 12:46:46 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77772 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77772 ']' 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77772 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77772 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.086 killing process with pid 77772 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77772' 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77772 00:15:47.086 12:46:46 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77772 00:15:47.348 00:15:47.348 real 0m2.154s 00:15:47.348 user 0m4.125s 00:15:47.348 sys 0m0.541s 00:15:47.348 12:46:47 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.348 ************************************ 00:15:47.348 END TEST nvme_rpc 00:15:47.348 12:46:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.348 ************************************ 00:15:47.348 12:46:47 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:47.348 12:46:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:15:47.348 12:46:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.348 12:46:47 -- common/autotest_common.sh@10 -- # set +x 00:15:47.348 ************************************ 00:15:47.348 START TEST nvme_rpc_timeouts 00:15:47.348 ************************************ 00:15:47.348 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:47.348 * Looking for test storage... 00:15:47.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:47.348 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:47.348 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:15:47.348 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.609 12:46:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:47.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.609 --rc genhtml_branch_coverage=1 00:15:47.609 --rc genhtml_function_coverage=1 00:15:47.609 --rc genhtml_legend=1 00:15:47.609 --rc geninfo_all_blocks=1 00:15:47.609 --rc geninfo_unexecuted_blocks=1 00:15:47.609 00:15:47.609 ' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:47.609 12:46:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77826 00:15:47.609 12:46:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77826 00:15:47.609 12:46:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77858 00:15:47.609 12:46:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
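Before touching any settings, the test registers one trap that both kills the SPDK target and removes the two snapshot files on SIGINT, SIGTERM, or EXIT, so a failed run cannot leave a stray spdk_tgt or stale tmpfiles behind. A condensed sketch of that setup, using the paths from the trace ($$ stands in for the logged PID-stamped suffixes):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    default_cfg=/tmp/settings_default_$$
    modified_cfg=/tmp/settings_modified_$$

    "$spdk_tgt" -m 0x3 &
    tgt_pid=$!
    # One handler covers interrupt, termination, and normal exit.
    trap 'kill -9 $tgt_pid; rm -f "$default_cfg" "$modified_cfg"; exit 1' SIGINT SIGTERM EXIT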
00:15:47.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.609 12:46:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77858 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77858 ']' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.609 12:46:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:47.609 12:46:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:15:47.609 [2024-12-05 12:46:47.340130] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:15:47.609 [2024-12-05 12:46:47.340268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77858 ] 00:15:47.868 [2024-12-05 12:46:47.499933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.868 [2024-12-05 12:46:47.526379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.868 [2024-12-05 12:46:47.526420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.461 12:46:48 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.461 12:46:48 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:15:48.461 Checking default timeout settings: 00:15:48.461 12:46:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:48.461 12:46:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:48.740 Making settings changes with rpc: 00:15:48.740 12:46:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:48.740 12:46:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:48.998 Check default vs. modified settings: 00:15:48.998 12:46:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:15:48.998 12:46:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77826 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77826 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:49.256 Setting action_on_timeout is changed as expected. 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77826 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77826 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:49.256 Setting timeout_us is changed as expected. 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
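Stripped of xtrace noise, the test body above is three RPC calls against the freshly started spdk_tgt: snapshot the default bdev_nvme options, change the timeout behaviour, snapshot again. The rpc.py path, flag values, and variable names below are verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Checking default timeout settings: capture the default configuration
    $rpc save_config > "$tmpfile_default_settings"

    # Making settings changes with rpc: 12 s I/O timeout, 24 s admin timeout, abort on expiry
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort

    # Check default vs. modified settings: capture the result for comparison
    $rpc save_config > "$tmpfile_modified_settings"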
00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77826 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77826 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:49.256 Setting timeout_admin_us is changed as expected. 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77826 /tmp/settings_modified_77826 00:15:49.256 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77858 00:15:49.256 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77858 ']' 00:15:49.256 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77858 00:15:49.257 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:15:49.257 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.257 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77858 00:15:49.514 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.514 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.514 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77858' 00:15:49.514 killing process with pid 77858 00:15:49.514 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77858 00:15:49.514 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77858 00:15:49.772 RPC TIMEOUT SETTING TEST PASSED. 00:15:49.772 12:46:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
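Each of the three settings is then verified with the pipeline the trace shows: grep the key out of both saved configs, keep the second whitespace-delimited field, strip non-alphanumerics, and require that the value changed. Condensed below; the check_setting wrapper is illustrative, but the loop body matches nvme_rpc_timeouts.sh@39-47:

    check_setting() {
        local key=$1 before after
        before=$(grep "$key" /tmp/settings_default_77826  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep  "$key" /tmp/settings_modified_77826 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        # fail if the RPC silently did nothing
        [ "$before" == "$after" ] && return 1
        echo "Setting $key is changed as expected."
    }

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        check_setting "$setting"
    done

In the run above the values move from none/0/0 to abort/12000000/24000000, so all three checks pass.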
00:15:49.772 00:15:49.772 real 0m2.324s 00:15:49.772 user 0m4.615s 00:15:49.772 sys 0m0.510s 00:15:49.772 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.772 12:46:49 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:15:49.772 ************************************ 00:15:49.772 END TEST nvme_rpc_timeouts 00:15:49.772 ************************************ 00:15:49.772 12:46:49 -- spdk/autotest.sh@239 -- # uname -s 00:15:49.772 12:46:49 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:15:49.772 12:46:49 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:49.772 12:46:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:49.772 12:46:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.772 12:46:49 -- common/autotest_common.sh@10 -- # set +x 00:15:49.772 ************************************ 00:15:49.772 START TEST sw_hotplug 00:15:49.772 ************************************ 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:15:49.772 * Looking for test storage... 00:15:49.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.772 12:46:49 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.772 --rc genhtml_branch_coverage=1 00:15:49.772 --rc genhtml_function_coverage=1 00:15:49.772 --rc genhtml_legend=1 00:15:49.772 --rc geninfo_all_blocks=1 00:15:49.772 --rc geninfo_unexecuted_blocks=1 00:15:49.772 00:15:49.772 ' 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.772 --rc genhtml_branch_coverage=1 00:15:49.772 --rc genhtml_function_coverage=1 00:15:49.772 --rc genhtml_legend=1 00:15:49.772 --rc geninfo_all_blocks=1 00:15:49.772 --rc geninfo_unexecuted_blocks=1 00:15:49.772 00:15:49.772 ' 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.772 --rc genhtml_branch_coverage=1 00:15:49.772 --rc genhtml_function_coverage=1 00:15:49.772 --rc genhtml_legend=1 00:15:49.772 --rc geninfo_all_blocks=1 00:15:49.772 --rc geninfo_unexecuted_blocks=1 00:15:49.772 00:15:49.772 ' 00:15:49.772 12:46:49 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:49.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.772 --rc genhtml_branch_coverage=1 00:15:49.772 --rc genhtml_function_coverage=1 00:15:49.772 --rc genhtml_legend=1 00:15:49.772 --rc geninfo_all_blocks=1 00:15:49.772 --rc geninfo_unexecuted_blocks=1 00:15:49.772 00:15:49.772 ' 00:15:49.772 12:46:49 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:50.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.338 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.338 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.338 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.338 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.338 12:46:50 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:15:50.338 12:46:50 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:15:50.338 12:46:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:15:50.338 12:46:50 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@233 -- # local class 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:50.338 12:46:50 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@18 -- # local i 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:15:50.338 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:15:50.339 12:46:50 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:15:50.339 12:46:50 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:15:50.339 12:46:50 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:15:50.339 12:46:50 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:15:50.339 12:46:50 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:15:50.339 12:46:50 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:50.339 12:46:50 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.854 Waiting for block devices as requested 00:15:50.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:50.854 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:50.854 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:51.111 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:56.390 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:56.390 12:46:55 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:15:56.390 12:46:55 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:56.390 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:15:56.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:56.390 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:15:56.647 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:15:56.904 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:56.904 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:56.904 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:15:56.904 12:46:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78698 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:15:57.161 12:46:56 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:57.161 12:46:56 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:57.161 12:46:56 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:57.161 12:46:56 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:57.161 12:46:56 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:57.161 12:46:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:57.161 Initializing NVMe Controllers 00:15:57.161 Attaching to 0000:00:10.0 00:15:57.161 Attaching to 0000:00:11.0 00:15:57.161 Attached to 0000:00:11.0 00:15:57.161 Attached to 0000:00:10.0 00:15:57.161 Initialization complete. Starting I/O... 
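Before the hotplug harness starts, nvme_in_userspace (traced at scripts/common.sh@233-329 above) selects controllers purely by PCI class: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The pipeline below is a compact equivalent; the exact stage order in the real helper may differ, and the PCI_ALLOWED/PCI_BLOCKED filtering plus the /sys/bus/pci/drivers/nvme existence check are omitted:

    # Print the BDF of every NVMe controller (PCI class code 0108, prog-if 02).
    nvme_in_userspace() {
        lspci -mm -n -D | grep -i -- -p02 | tr -d '"' \
            | awk -v cc="0108" '$2 == cc { print $1 }'
    }

    nvmes=($(nvme_in_userspace))   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

The test then keeps only the first nvme_count=2 entries, which is why only 0000:00:10.0 and 0000:00:11.0 take part in the hotplug loop.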
00:15:57.161 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:15:57.161 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:15:57.161 00:15:58.530 QEMU NVMe Ctrl (12341 ): 3988 I/Os completed (+3988) 00:15:58.530 QEMU NVMe Ctrl (12340 ): 4038 I/Os completed (+4038) 00:15:58.530 00:15:59.464 QEMU NVMe Ctrl (12341 ): 7932 I/Os completed (+3944) 00:15:59.464 QEMU NVMe Ctrl (12340 ): 8285 I/Os completed (+4247) 00:15:59.464 00:16:00.398 QEMU NVMe Ctrl (12341 ): 12917 I/Os completed (+4985) 00:16:00.398 QEMU NVMe Ctrl (12340 ): 13607 I/Os completed (+5322) 00:16:00.398 00:16:01.331 QEMU NVMe Ctrl (12341 ): 17254 I/Os completed (+4337) 00:16:01.331 QEMU NVMe Ctrl (12340 ): 17960 I/Os completed (+4353) 00:16:01.331 00:16:02.291 QEMU NVMe Ctrl (12341 ): 21080 I/Os completed (+3826) 00:16:02.291 QEMU NVMe Ctrl (12340 ): 22125 I/Os completed (+4165) 00:16:02.291 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:03.231 [2024-12-05 12:47:02.811437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:03.231 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:03.231 [2024-12-05 12:47:02.812559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.812604] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.812619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.812638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:03.231 [2024-12-05 12:47:02.813922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.813967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.813982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.813997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:03.231 [2024-12-05 12:47:02.837218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:03.231 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:03.231 [2024-12-05 12:47:02.838349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.838555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.838580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.838595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:03.231 [2024-12-05 12:47:02.839916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.840016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.840087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 [2024-12-05 12:47:02.840104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:03.231 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:03.231 EAL: Scan for (pci) bus failed. 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:03.231 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:03.231 12:47:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:03.231 Attaching to 0000:00:10.0 00:16:03.231 Attached to 0000:00:10.0 00:16:03.231 12:47:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:03.492 12:47:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:03.492 12:47:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:03.492 Attaching to 0000:00:11.0 00:16:03.492 Attached to 0000:00:11.0 00:16:04.429 QEMU NVMe Ctrl (12340 ): 3701 I/Os completed (+3701) 00:16:04.429 QEMU NVMe Ctrl (12341 ): 3304 I/Os completed (+3304) 00:16:04.429 00:16:05.367 QEMU NVMe Ctrl (12340 ): 7975 I/Os completed (+4274) 00:16:05.367 QEMU NVMe Ctrl (12341 ): 7256 I/Os completed (+3952) 00:16:05.367 00:16:06.412 QEMU NVMe Ctrl (12340 ): 11728 I/Os completed (+3753) 00:16:06.412 QEMU NVMe Ctrl (12341 ): 11197 I/Os completed (+3941) 00:16:06.412 00:16:07.355 QEMU NVMe Ctrl (12340 ): 15166 I/Os completed (+3438) 00:16:07.355 QEMU NVMe Ctrl (12341 ): 14665 I/Os completed (+3468) 00:16:07.355 00:16:08.298 QEMU NVMe Ctrl (12340 ): 18614 I/Os completed (+3448) 00:16:08.298 QEMU NVMe Ctrl (12341 ): 18153 I/Os completed (+3488) 00:16:08.298 00:16:09.240 QEMU NVMe Ctrl (12340 ): 22084 I/Os completed (+3470) 00:16:09.240 QEMU NVMe Ctrl (12341 ): 21620 I/Os completed (+3467) 00:16:09.240 00:16:10.182 QEMU NVMe Ctrl (12340 ): 25507 I/Os completed (+3423) 00:16:10.182 QEMU NVMe Ctrl (12341 ): 25038 I/Os completed (+3418) 
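One hotplug event, as just traced (sw_hotplug.sh@39-62), is a surprise removal of each device followed by a PCI rescan and a rebind to uio_pci_generic. xtrace does not show redirection targets, so the sysfs paths below are assumptions based on the standard Linux PCI interface (only /sys/bus/pci/rescan is confirmed later in this log, in the tgt_run_hotplug trap):

    # Assumed reconstruction of one remove/re-attach cycle for the two test devices.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"    # sw_hotplug.sh@40: surprise removal
    done
    echo 1 > /sys/bus/pci/rescan                       # sw_hotplug.sh@56: rediscover devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe       # @60/@61 (assumed target)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear override
    done

The EAL "cannot open sysfs value .../vendor" complaints above are consistent with the hotplug example rescanning the bus while a device node is briefly absent.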
00:16:10.182 00:16:11.569 QEMU NVMe Ctrl (12340 ): 28943 I/Os completed (+3436) 00:16:11.569 QEMU NVMe Ctrl (12341 ): 28555 I/Os completed (+3517) 00:16:11.569 00:16:12.509 QEMU NVMe Ctrl (12340 ): 32314 I/Os completed (+3371) 00:16:12.509 QEMU NVMe Ctrl (12341 ): 31933 I/Os completed (+3378) 00:16:12.509 00:16:13.440 QEMU NVMe Ctrl (12340 ): 36031 I/Os completed (+3717) 00:16:13.440 QEMU NVMe Ctrl (12341 ): 35816 I/Os completed (+3883) 00:16:13.440 00:16:14.369 QEMU NVMe Ctrl (12340 ): 39984 I/Os completed (+3953) 00:16:14.369 QEMU NVMe Ctrl (12341 ): 40303 I/Os completed (+4487) 00:16:14.369 00:16:15.306 QEMU NVMe Ctrl (12340 ): 43458 I/Os completed (+3474) 00:16:15.306 QEMU NVMe Ctrl (12341 ): 43930 I/Os completed (+3627) 00:16:15.306 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:15.306 [2024-12-05 12:47:15.094224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:15.306 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:15.306 [2024-12-05 12:47:15.095681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.095820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.095857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.095925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:15.306 [2024-12-05 12:47:15.097409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.097516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.097535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.097552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:15.306 [2024-12-05 12:47:15.116553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:15.306 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:15.306 [2024-12-05 12:47:15.117695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.118237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.118331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.118373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:15.306 [2024-12-05 12:47:15.121406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.121476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.121512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 [2024-12-05 12:47:15.121537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:15.306 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:15.306 EAL: Scan for (pci) bus failed. 00:16:15.306 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:15.565 Attaching to 0000:00:10.0 00:16:15.565 Attached to 0000:00:10.0 00:16:15.565 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:15.824 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:15.824 12:47:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:15.824 Attaching to 0000:00:11.0 00:16:15.824 Attached to 0000:00:11.0 00:16:16.395 QEMU NVMe Ctrl (12340 ): 2168 I/Os completed (+2168) 00:16:16.395 QEMU NVMe Ctrl (12341 ): 1884 I/Os completed (+1884) 00:16:16.395 00:16:17.350 QEMU NVMe Ctrl (12340 ): 5588 I/Os completed (+3420) 00:16:17.350 QEMU NVMe Ctrl (12341 ): 5309 I/Os completed (+3425) 00:16:17.350 00:16:18.292 QEMU NVMe Ctrl (12340 ): 8932 I/Os completed (+3344) 00:16:18.292 QEMU NVMe Ctrl (12341 ): 8653 I/Os completed (+3344) 00:16:18.292 00:16:19.226 QEMU NVMe Ctrl (12340 ): 12394 I/Os completed (+3462) 00:16:19.226 QEMU NVMe Ctrl (12341 ): 12338 I/Os completed (+3685) 00:16:19.226 00:16:20.161 QEMU NVMe Ctrl (12340 ): 16079 I/Os completed (+3685) 00:16:20.161 QEMU NVMe Ctrl (12341 ): 16213 I/Os completed (+3875) 00:16:20.161 00:16:21.533 QEMU NVMe Ctrl (12340 ): 19660 I/Os completed (+3581) 00:16:21.533 QEMU NVMe Ctrl (12341 ): 20095 I/Os completed (+3882) 00:16:21.533 00:16:22.470 QEMU NVMe Ctrl (12340 ): 23118 I/Os completed (+3458) 00:16:22.470 QEMU NVMe Ctrl (12341 ): 23652 I/Os completed (+3557) 00:16:22.470 
00:16:23.407 QEMU NVMe Ctrl (12340 ): 26538 I/Os completed (+3420) 00:16:23.407 QEMU NVMe Ctrl (12341 ): 27088 I/Os completed (+3436) 00:16:23.407 00:16:24.347 QEMU NVMe Ctrl (12340 ): 29976 I/Os completed (+3438) 00:16:24.347 QEMU NVMe Ctrl (12341 ): 30711 I/Os completed (+3623) 00:16:24.347 00:16:25.440 QEMU NVMe Ctrl (12340 ): 33364 I/Os completed (+3388) 00:16:25.440 QEMU NVMe Ctrl (12341 ): 34092 I/Os completed (+3381) 00:16:25.440 00:16:26.389 QEMU NVMe Ctrl (12340 ): 36733 I/Os completed (+3369) 00:16:26.389 QEMU NVMe Ctrl (12341 ): 37484 I/Os completed (+3392) 00:16:26.389 00:16:27.330 QEMU NVMe Ctrl (12340 ): 39961 I/Os completed (+3228) 00:16:27.330 QEMU NVMe Ctrl (12341 ): 40715 I/Os completed (+3231) 00:16:27.330 00:16:27.591 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:27.591 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:27.591 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:27.591 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:27.591 [2024-12-05 12:47:27.431247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:27.591 Controller removed: QEMU NVMe Ctrl (12340 ) 00:16:27.591 [2024-12-05 12:47:27.433256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.591 [2024-12-05 12:47:27.433387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.591 [2024-12-05 12:47:27.433455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.591 [2024-12-05 12:47:27.433478] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.591 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:27.591 [2024-12-05 12:47:27.437583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.591 [2024-12-05 12:47:27.437633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.591 [2024-12-05 12:47:27.437652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.591 [2024-12-05 12:47:27.437667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:27.851 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:27.851 [2024-12-05 12:47:27.456086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
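The `helper_time=42.96` reported at the end of this third cycle comes from bash's `time` keyword with `TIMEFORMAT=%2R` (wall-clock seconds, two decimals), which timing_cmd set up back at the start of the run. A minimal reproduction of that capture pattern (the real timing_cmd in autotest_common.sh additionally preserves the wrapped command's stdout via exec):

    timing_cmd() {
        local TIMEFORMAT=%2R elapsed
        # `time` reports on the group's stderr; the outer 2>&1 captures just that
        elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
        echo "$elapsed"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 false)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' "$helper_time" 2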
00:16:27.851 Controller removed: QEMU NVMe Ctrl (12341 ) 00:16:27.851 [2024-12-05 12:47:27.457115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 [2024-12-05 12:47:27.457151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 [2024-12-05 12:47:27.457168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 [2024-12-05 12:47:27.457181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:27.851 [2024-12-05 12:47:27.458292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 [2024-12-05 12:47:27.458326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 [2024-12-05 12:47:27.458343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.851 [2024-12-05 12:47:27.458356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.852 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:16:27.852 EAL: Scan for (pci) bus failed. 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:27.852 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:27.852 Attaching to 0000:00:10.0 00:16:27.852 Attached to 0000:00:10.0 00:16:28.112 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:28.112 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:28.112 12:47:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:28.112 Attaching to 0000:00:11.0 00:16:28.112 Attached to 0000:00:11.0 00:16:28.112 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:16:28.112 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:16:28.112 [2024-12-05 12:47:27.768305] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:16:40.338 12:47:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:16:40.338 12:47:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:40.338 12:47:39 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.96 00:16:40.338 12:47:39 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.96 00:16:40.338 12:47:39 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:40.338 12:47:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.96 00:16:40.338 12:47:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.96 2 00:16:40.338 remove_attach_helper took 42.96s to complete (handling 2 nvme drive(s)) 12:47:39 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78698 00:16:46.930 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78698) - No such process 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78698 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79248 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79248 00:16:46.930 12:47:45 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:46.930 12:47:45 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 79248 ']' 00:16:46.930 12:47:45 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.930 12:47:45 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.930 12:47:45 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.930 12:47:45 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.930 12:47:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:46.930 [2024-12-05 12:47:45.859273] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
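tgt_run_hotplug repeats the same startup dance the nvme_rpc_timeouts test used: launch spdk_tgt, then block in waitforlisten until the RPC socket answers. The sketch below captures that polling loop; the rpc_get_methods probe is an assumption about how readiness is detected, while the socket path, retry budget, and message are as traced:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died before listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                > /dev/null 2>&1 && return 0           # socket is up and answering
            sleep 0.5
        done
        return 1                                       # never came up
    }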
00:16:46.930 [2024-12-05 12:47:45.859827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79248 ] 00:16:46.930 [2024-12-05 12:47:46.020131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.930 [2024-12-05 12:47:46.047410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:46.930 12:47:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:46.930 12:47:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:53.481 12:47:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.481 12:47:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:53.481 12:47:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:53.481 12:47:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:53.481 [2024-12-05 12:47:52.826886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:16:53.481 [2024-12-05 12:47:52.828479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.481 [2024-12-05 12:47:52.828519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.481 [2024-12-05 12:47:52.828534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.481 [2024-12-05 12:47:52.828549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.481 [2024-12-05 12:47:52.828561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.481 [2024-12-05 12:47:52.828568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.481 [2024-12-05 12:47:52.828579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.481 [2024-12-05 12:47:52.828586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.481 [2024-12-05 12:47:52.828595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.481 [2024-12-05 12:47:52.828602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.481 [2024-12-05 12:47:52.828611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.481 [2024-12-05 12:47:52.828618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.481 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:53.481 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:53.481 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:53.481 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:53.481 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:53.482 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:53.482 12:47:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.482 12:47:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:53.482 [2024-12-05 12:47:53.326892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:53.482 12:47:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.482 [2024-12-05 12:47:53.328499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.482 [2024-12-05 12:47:53.328537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.482 [2024-12-05 12:47:53.328551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.482 [2024-12-05 12:47:53.328569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.482 [2024-12-05 12:47:53.328578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.482 [2024-12-05 12:47:53.328589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.482 [2024-12-05 12:47:53.328598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.482 [2024-12-05 12:47:53.328608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.482 [2024-12-05 12:47:53.328616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.482 [2024-12-05 12:47:53.328629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:53.482 [2024-12-05 12:47:53.328637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.482 [2024-12-05 12:47:53.328648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.740 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:53.740 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:53.997 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:53.997 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:54.255 12:47:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.255 12:47:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:54.255 12:47:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:54.255 12:47:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:54.255 12:47:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:54.255 12:47:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:54.255 12:47:54 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:54.255 12:47:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:54.255 12:47:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:54.513 12:47:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:54.513 12:47:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:54.513 12:47:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:06.802 12:48:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.802 12:48:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:06.802 12:48:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:06.802 12:48:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:06.802 12:48:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:06.802 [2024-12-05 12:48:06.227092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:17:06.802 [2024-12-05 12:48:06.228607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.802 [2024-12-05 12:48:06.228712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.802 [2024-12-05 12:48:06.228732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 12:48:06.228747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.802 [2024-12-05 12:48:06.228756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.802 [2024-12-05 12:48:06.228764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 12:48:06.228773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.802 [2024-12-05 12:48:06.228800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.802 [2024-12-05 12:48:06.228820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 12:48:06.228827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.802 [2024-12-05 12:48:06.228835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.802 [2024-12-05 12:48:06.228842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.802 12:48:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:06.802 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:06.802 [2024-12-05 12:48:06.627124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
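[Editor's note] The (( 2 > 0 )) / sleep 0.5 / 'Still waiting for %s to be gone' churn above is a removal-wait loop: after the devices are surprise-removed, the test re-queries the bdev list every half second until no BDF is reported any more. A hedged reconstruction of the loop shape, reusing the bdev_bdfs sketch from the earlier note:

    # Poll until the surprise-removed controllers drop out of the bdev list.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done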
00:17:06.802 [2024-12-05 12:48:06.628794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.802 [2024-12-05 12:48:06.628841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.802 [2024-12-05 12:48:06.628853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 12:48:06.628870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.802 [2024-12-05 12:48:06.628878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.802 [2024-12-05 12:48:06.628887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.802 [2024-12-05 12:48:06.628895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.803 [2024-12-05 12:48:06.628908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.803 [2024-12-05 12:48:06.628915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.803 [2024-12-05 12:48:06.628924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:06.803 [2024-12-05 12:48:06.628931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.803 [2024-12-05 12:48:06.628940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:07.061 12:48:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:07.061 12:48:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:07.061 12:48:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:07.061 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:07.380 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:07.380 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:07.380 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:07.380 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:07.380 12:48:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:07.380 12:48:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:07.380 12:48:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:07.380 12:48:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:19.571 12:48:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.571 12:48:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.571 12:48:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:19.571 12:48:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.571 12:48:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.571 12:48:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.571 [2024-12-05 12:48:19.127309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
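[Editor's note] The echoes at sw_hotplug.sh lines 56-62 above (1, then uio_pci_generic, the BDF twice, and an empty string per device) are the standard sysfs re-attach dance after a surprise removal: rescan the bus, steer the re-enumerated device to the userspace driver via driver_override, bind it explicitly, and clear the override again. The script body itself is not in the log, so this sketch (and the rescan_and_rebind name) leans only on the conventional kernel interfaces:

    # Bring a surprise-removed device back and hand it to uio_pci_generic.
    rescan_and_rebind() {
        local bdf=$1
        echo 1 > /sys/bus/pci/rescan                              # re-enumerate the bus
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind   # explicit bind
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"     # clear the override
    }

    rescan_and_rebind 0000:00:10.0
    rescan_and_rebind 0000:00:11.0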
00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:19.571 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:19.571 [2024-12-05 12:48:19.128793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.571 [2024-12-05 12:48:19.128830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.571 [2024-12-05 12:48:19.128846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.571 [2024-12-05 12:48:19.128861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.572 [2024-12-05 12:48:19.128874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.572 [2024-12-05 12:48:19.128881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.572 [2024-12-05 12:48:19.128890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.572 [2024-12-05 12:48:19.128897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.572 [2024-12-05 12:48:19.128906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.572 [2024-12-05 12:48:19.128913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.572 [2024-12-05 12:48:19.128922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.572 [2024-12-05 12:48:19.128928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.829 [2024-12-05 12:48:19.627329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:17:19.829 [2024-12-05 12:48:19.629038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.829 [2024-12-05 12:48:19.629160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.829 [2024-12-05 12:48:19.629234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.829 [2024-12-05 12:48:19.629299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.829 [2024-12-05 12:48:19.629319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.829 [2024-12-05 12:48:19.629347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.829 [2024-12-05 12:48:19.629380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.829 [2024-12-05 12:48:19.629430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.829 [2024-12-05 12:48:19.629469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.829 [2024-12-05 12:48:19.629506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:19.829 [2024-12-05 12:48:19.629524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.829 [2024-12-05 12:48:19.629573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:19.829 12:48:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.829 12:48:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:19.829 12:48:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:19.829 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:20.087 12:48:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.22 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.22 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.22 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.22 2 00:17:32.355 remove_attach_helper took 45.22s to complete (handling 2 nvme drive(s)) 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:17:32.355 12:48:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:17:32.355 12:48:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:17:32.355 12:48:31 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:17:38.924 12:48:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:38.925 12:48:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:38.925 12:48:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:38.925 12:48:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.925 12:48:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:38.925 12:48:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:38.925 [2024-12-05 12:48:38.075483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:17:38.925 [2024-12-05 12:48:38.076906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.076948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.076964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.925 [2024-12-05 12:48:38.076980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.076991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.076999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.925 [2024-12-05 12:48:38.077008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.077015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.077027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.925 [2024-12-05 12:48:38.077034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.077043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.077050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:38.925 12:48:38 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:38.925 12:48:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.925 12:48:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:38.925 12:48:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:17:38.925 12:48:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:38.925 [2024-12-05 12:48:38.675493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:17:38.925 [2024-12-05 12:48:38.676702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.676751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.676765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.925 [2024-12-05 12:48:38.676785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.676794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.676804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.925 [2024-12-05 12:48:38.676826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.676838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.676846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.925 [2024-12-05 12:48:38.676856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:38.925 [2024-12-05 12:48:38.676863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:38.925 [2024-12-05 12:48:38.676875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:39.490 12:48:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.490 12:48:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:39.490 12:48:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:39.490 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:17:39.748 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:39.749 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:39.749 12:48:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:52.013 12:48:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.013 12:48:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:52.013 12:48:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:52.013 12:48:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.013 12:48:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:52.013 [2024-12-05 12:48:51.475684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
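[Editor's note] This second phase was set up by the rpc_cmd bdev_nvme_set_hotplug -d / -e pair traced earlier: instead of the test polling by hand, SPDK's bdev_nvme driver runs its own hotplug monitor, and the RPC toggles it off while the test reconfigures, then back on so the target notices controllers leaving and returning by itself. Via the stock RPC client that toggle is simply:

    # Toggle the bdev_nvme hotplug poller in the running target.
    scripts/rpc.py bdev_nvme_set_hotplug -d   # stop watching for PCI events
    scripts/rpc.py bdev_nvme_set_hotplug -e   # start watching again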
00:17:52.013 [2024-12-05 12:48:51.476840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.013 [2024-12-05 12:48:51.476977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.013 [2024-12-05 12:48:51.477000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.013 [2024-12-05 12:48:51.477017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.013 [2024-12-05 12:48:51.477028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.013 [2024-12-05 12:48:51.477035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.013 [2024-12-05 12:48:51.477045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.013 [2024-12-05 12:48:51.477052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.013 [2024-12-05 12:48:51.477061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.013 [2024-12-05 12:48:51.477069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.013 [2024-12-05 12:48:51.477077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.013 [2024-12-05 12:48:51.477084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.013 12:48:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:17:52.013 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:17:52.272 [2024-12-05 12:48:51.875706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
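[Editor's note] Put together, the repeating pattern in this trace (decrement hotplug_events, write 1 to each device's remove node, wait for the bdevs to vanish, rescan and rebind, sleep 12, compare BDF lists) is one helper, invoked as remove_attach_helper 3 6 true earlier in the log. A skeleton reconstruction; the body paraphrases the traced steps rather than quoting the script:

    remove_attach_helper() {
        local hotplug_events=$1   # 3: repeat the unplug/replug cycle three times
        local hotplug_wait=$2     # 6: base settle time, in seconds
        local use_bdev=$3         # true: verify through bdev_get_bdevs
        local dev bdfs

        sleep "$hotplug_wait"     # let the target finish starting up first
        while ((hotplug_events--)); do
            for dev in "${nvmes[@]}"; do
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise removal
            done
            # ...poll until bdev_bdfs reports nothing, rescan_and_rebind each
            # device, then allow 2*hotplug_wait seconds (the 'sleep 12' above)
            # before checking that the BDF list matches ${nvmes[*]} again...
        done
    }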
00:17:52.272 [2024-12-05 12:48:51.876845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.272 [2024-12-05 12:48:51.876890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.272 [2024-12-05 12:48:51.876902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.272 [2024-12-05 12:48:51.876919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.273 [2024-12-05 12:48:51.876926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.273 [2024-12-05 12:48:51.876935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.273 [2024-12-05 12:48:51.876942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.273 [2024-12-05 12:48:51.876951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.273 [2024-12-05 12:48:51.876958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.273 [2024-12-05 12:48:51.876966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:52.273 [2024-12-05 12:48:51.876973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.273 [2024-12-05 12:48:51.876982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.273 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:17:52.273 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:17:52.273 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:17:52.273 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:17:52.273 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:17:52.273 12:48:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:52.273 12:48:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.273 12:48:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:17:52.273 12:48:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.273 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:17:52.273 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:17:52.273 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:52.273 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:52.273 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:17:52.531 12:48:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:04.759 12:49:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.759 12:49:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:04.759 12:49:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:04.759 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:04.759 12:49:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.759 12:49:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:04.759 [2024-12-05 12:49:04.375912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
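[Editor's note] The time=45.22 / helper_time bookkeeping seen at the end of each phase comes from wrapping the helper in bash's time builtin with TIMEFORMAT=%2R, turning the elapsed wall-clock seconds into a plain string the harness can print. One self-contained way to do that capture (the fd shuffle keeps the command's own output out of the captured value):

    # Run a command, keep its output visible, and return its elapsed seconds.
    timing_cmd() {
        local time TIMEFORMAT=%2R   # %2R: real (wall-clock) time, two decimals
        exec 3>&2                   # stash stderr for the command's own output
        time=$({ time "$@" >&3 2>&3; } 2>&1)   # only time's report is captured
        exec 3>&-
        echo "$time"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2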
00:18:04.760 [2024-12-05 12:49:04.377301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:04.760 [2024-12-05 12:49:04.377339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.760 [2024-12-05 12:49:04.377354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.760 [2024-12-05 12:49:04.377370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:04.760 [2024-12-05 12:49:04.377383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.760 [2024-12-05 12:49:04.377391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.760 [2024-12-05 12:49:04.377399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:04.760 [2024-12-05 12:49:04.377406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.760 [2024-12-05 12:49:04.377415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.760 [2024-12-05 12:49:04.377422] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:04.760 [2024-12-05 12:49:04.377430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:04.760 [2024-12-05 12:49:04.377437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:04.760 12:49:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.760 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:18:04.760 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:18:05.018 [2024-12-05 12:49:04.775930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
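[Editor's note] Each cycle ends with the [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:... ]] check: the freshly queried BDF list must equal the original device set exactly, or a controller failed to come back. The backslash soup is only xtrace's rendering of a quoted right-hand side; [[ == ]] glob-matches an unquoted RHS, so the script quotes it to force a literal comparison. In plain form:

    expected='0000:00:10.0 0000:00:11.0'   # the controllers under test
    bdfs=($(bdev_bdfs))                    # sorted list the target reports back
    # Quote the RHS: an unquoted one would be treated as a glob pattern.
    [[ ${bdfs[*]} == "$expected" ]] || exit 1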
00:18:05.018 [2024-12-05 12:49:04.777354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:05.018 [2024-12-05 12:49:04.777512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.018 [2024-12-05 12:49:04.777530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.018 [2024-12-05 12:49:04.777547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:05.018 [2024-12-05 12:49:04.777556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.018 [2024-12-05 12:49:04.777565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.018 [2024-12-05 12:49:04.777573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:05.018 [2024-12-05 12:49:04.777585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.018 [2024-12-05 12:49:04.777591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.018 [2024-12-05 12:49:04.777600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:05.018 [2024-12-05 12:49:04.777607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:05.018 [2024-12-05 12:49:04.777615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:05.276 12:49:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.276 12:49:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:05.276 12:49:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:05.276 12:49:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:05.276 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:05.276 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:05.276 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:05.276 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:05.276 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:18:05.533 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:05.533 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:05.533 12:49:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:18:17.734 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:18:17.734 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79248 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 79248 ']' 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 79248 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79248 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79248' 00:18:17.734 killing process with pid 79248 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@973 -- # kill 79248 00:18:17.734 12:49:17 sw_hotplug -- common/autotest_common.sh@978 -- # wait 79248 00:18:17.995 12:49:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:18.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:18.513 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.513 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:18.513 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:18.772 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:18.772 00:18:18.772 real 2m28.942s 00:18:18.772 user 1m49.820s 00:18:18.772 sys 0m17.953s 00:18:18.772 
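[Editor's note] Teardown goes through the harness's killprocess helper, whose probes are all visible in the trace above: kill -0 as a liveness check, uname, ps --no-headers -o comm= to fetch the process name (reactor_0 for an SPDK app), the 'killing process with pid' message, then kill and wait. A condensed sketch of the logic those steps imply; the sudo special case is simplified to a bail-out here:

    # Stop a previously started SPDK app and reap its exit status.
    # $pid must be a child of this shell for 'wait' to work.
    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2> /dev/null || return 0   # already gone
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [[ $name == sudo ]] && return 0           # simplified: the real helper
                                                  # treats sudo-wrapped apps specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

    killprocess 79248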
************************************ 00:18:18.772 END TEST sw_hotplug 00:18:18.772 12:49:18 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.772 12:49:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:18.772 ************************************ 00:18:18.772 12:49:18 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:18:18.772 12:49:18 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:18.772 12:49:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.772 12:49:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.772 12:49:18 -- common/autotest_common.sh@10 -- # set +x 00:18:18.772 ************************************ 00:18:18.772 START TEST nvme_xnvme 00:18:18.772 ************************************ 00:18:18.772 12:49:18 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:18:18.772 * Looking for test storage... 00:18:18.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:18.772 12:49:18 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.772 12:49:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.772 12:49:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.772 12:49:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.772 12:49:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.030 12:49:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.030 --rc genhtml_branch_coverage=1 00:18:19.030 --rc genhtml_function_coverage=1 00:18:19.030 --rc genhtml_legend=1 00:18:19.030 --rc geninfo_all_blocks=1 00:18:19.030 --rc geninfo_unexecuted_blocks=1 00:18:19.030 00:18:19.030 ' 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.030 --rc genhtml_branch_coverage=1 00:18:19.030 --rc genhtml_function_coverage=1 00:18:19.030 --rc genhtml_legend=1 00:18:19.030 --rc geninfo_all_blocks=1 00:18:19.030 --rc geninfo_unexecuted_blocks=1 00:18:19.030 00:18:19.030 ' 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.030 --rc genhtml_branch_coverage=1 00:18:19.030 --rc genhtml_function_coverage=1 00:18:19.030 --rc genhtml_legend=1 00:18:19.030 --rc geninfo_all_blocks=1 00:18:19.030 --rc geninfo_unexecuted_blocks=1 00:18:19.030 00:18:19.030 ' 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.030 --rc genhtml_branch_coverage=1 00:18:19.030 --rc genhtml_function_coverage=1 00:18:19.030 --rc genhtml_legend=1 00:18:19.030 --rc geninfo_all_blocks=1 00:18:19.030 --rc geninfo_unexecuted_blocks=1 00:18:19.030 00:18:19.030 ' 00:18:19.030 12:49:18 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:18:19.030 12:49:18 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:18:19.030 12:49:18 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:18:19.030 12:49:18 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:18:19.030 12:49:18 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:18:19.031 12:49:18 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:19.031 #define SPDK_CONFIG_H 00:18:19.031 #define SPDK_CONFIG_AIO_FSDEV 1 00:18:19.031 #define SPDK_CONFIG_APPS 1 00:18:19.031 #define SPDK_CONFIG_ARCH native 00:18:19.031 #define SPDK_CONFIG_ASAN 1 00:18:19.031 #undef SPDK_CONFIG_AVAHI 00:18:19.031 #undef SPDK_CONFIG_CET 00:18:19.031 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:18:19.031 #define SPDK_CONFIG_COVERAGE 1 00:18:19.031 #define SPDK_CONFIG_CROSS_PREFIX 00:18:19.031 #undef SPDK_CONFIG_CRYPTO 00:18:19.031 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:19.031 #undef SPDK_CONFIG_CUSTOMOCF 00:18:19.031 #undef SPDK_CONFIG_DAOS 00:18:19.031 #define SPDK_CONFIG_DAOS_DIR 00:18:19.031 #define SPDK_CONFIG_DEBUG 1 00:18:19.031 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:19.031 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:18:19.031 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:18:19.031 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.031 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:19.031 #undef SPDK_CONFIG_DPDK_UADK 00:18:19.031 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:19.031 #define SPDK_CONFIG_EXAMPLES 1 00:18:19.031 #undef SPDK_CONFIG_FC 00:18:19.031 #define SPDK_CONFIG_FC_PATH 00:18:19.031 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:19.031 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:19.031 #define SPDK_CONFIG_FSDEV 1 00:18:19.031 #undef SPDK_CONFIG_FUSE 00:18:19.031 #undef SPDK_CONFIG_FUZZER 00:18:19.031 #define SPDK_CONFIG_FUZZER_LIB 00:18:19.031 #undef SPDK_CONFIG_GOLANG 00:18:19.031 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:19.031 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:19.031 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:19.031 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:19.031 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:19.031 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:19.031 #undef SPDK_CONFIG_HAVE_LZ4 00:18:19.031 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:18:19.031 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:18:19.031 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:19.031 #define SPDK_CONFIG_IDXD 1 00:18:19.031 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:19.031 #undef SPDK_CONFIG_IPSEC_MB 00:18:19.031 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:19.031 #define SPDK_CONFIG_ISAL 1 00:18:19.031 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:19.031 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:19.031 #define SPDK_CONFIG_LIBDIR 00:18:19.031 #undef SPDK_CONFIG_LTO 00:18:19.031 #define SPDK_CONFIG_MAX_LCORES 128 00:18:19.031 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:18:19.031 #define SPDK_CONFIG_NVME_CUSE 1 00:18:19.031 #undef SPDK_CONFIG_OCF 00:18:19.031 #define SPDK_CONFIG_OCF_PATH 00:18:19.031 #define SPDK_CONFIG_OPENSSL_PATH 00:18:19.031 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:19.031 #define SPDK_CONFIG_PGO_DIR 00:18:19.031 #undef SPDK_CONFIG_PGO_USE 00:18:19.031 #define SPDK_CONFIG_PREFIX /usr/local 00:18:19.031 #undef SPDK_CONFIG_RAID5F 00:18:19.031 #undef SPDK_CONFIG_RBD 00:18:19.031 #define SPDK_CONFIG_RDMA 1 00:18:19.031 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:19.031 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:19.031 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:19.031 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:19.031 #define SPDK_CONFIG_SHARED 1 00:18:19.031 #undef SPDK_CONFIG_SMA 00:18:19.031 #define SPDK_CONFIG_TESTS 1 00:18:19.031 #undef SPDK_CONFIG_TSAN 00:18:19.031 #define SPDK_CONFIG_UBLK 1 00:18:19.031 #define SPDK_CONFIG_UBSAN 1 00:18:19.031 #undef SPDK_CONFIG_UNIT_TESTS 00:18:19.031 #undef SPDK_CONFIG_URING 00:18:19.031 #define SPDK_CONFIG_URING_PATH 00:18:19.031 #undef SPDK_CONFIG_URING_ZNS 00:18:19.031 #undef SPDK_CONFIG_USDT 00:18:19.031 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:19.031 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:19.031 #undef SPDK_CONFIG_VFIO_USER 00:18:19.031 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:19.031 #define SPDK_CONFIG_VHOST 1 00:18:19.031 #define SPDK_CONFIG_VIRTIO 1 00:18:19.031 #undef SPDK_CONFIG_VTUNE 00:18:19.031 #define SPDK_CONFIG_VTUNE_DIR 00:18:19.031 #define SPDK_CONFIG_WERROR 1 00:18:19.031 #define SPDK_CONFIG_WPDK_DIR 00:18:19.031 #define SPDK_CONFIG_XNVME 1 00:18:19.031 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:19.031 12:49:18 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.031 12:49:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.031 12:49:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.031 12:49:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.031 12:49:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.031 12:49:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.031 12:49:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.031 12:49:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.031 12:49:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:19.031 12:49:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:18:19.031 12:49:18 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:18:19.031 12:49:18 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:18:19.032 12:49:18 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@140 -- # : v23.11 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
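The long run of `# : 0` / `# : 1` records paired with `# export SPDK_TEST_...` above is autotest_common.sh giving every test flag a default that the caller's environment can override, via the shell's no-op colon builtin. A minimal sketch of that idiom, using the SPDK_TEST_XNVME flag this job resolves to 1 (the exact parameter-expansion form in the script may differ slightly):

: "${SPDK_TEST_XNVME:=1}"   # ':' discards the expansion; ':=' assigns the default only when unset
export SPDK_TEST_XNVME      # hand the resolved value down to every child process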
00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
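Condensed from the trace above, the sanitizer wiring the harness exports before any test binary runs amounts to the following (option strings copied verbatim from the log):

# rebuild the leak-sanitizer suppression list, then point LSAN at it
rm -rf /var/tmp/asan_suppression_file
echo "leak:libfuse3.so" > /var/tmp/asan_suppression_file
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
# fail fast on any ASAN/UBSAN finding instead of limping on
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134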
00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80606 ]] 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80606 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.JdRTH7 00:18:19.032 12:49:18 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.JdRTH7/tests/xnvme /tmp/spdk.JdRTH7 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13222211584 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6362451968 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13222211584 00:18:19.032 12:49:18 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6362451968 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:18:19.032 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=90368946176 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=9333833728 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:18:19.033 * Looking for test storage... 
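The mount-by-mount bookkeeping above is set_test_storage parsing a single `df -T` pass into associative arrays, after which it walks the storage candidates until one filesystem still holds the requested ~2.2 GB of scratch space. In outline, with variable names as traced (the real function also carries the tmpfs/ramfs special cases shown below):

declare -A mounts fss sizes avails uses
requested_size=2214592512   # the 2 GiB request plus margin, as computed above

while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

for target_dir in "${storage_candidates[@]}"; do    # testdir, fallback/tests/..., fallback
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space >= requested_size )) && break   # /home on btrfs wins in this run
done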
00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13222211584 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.033 --rc genhtml_branch_coverage=1 00:18:19.033 --rc genhtml_function_coverage=1 00:18:19.033 --rc genhtml_legend=1 00:18:19.033 --rc geninfo_all_blocks=1 00:18:19.033 --rc geninfo_unexecuted_blocks=1 00:18:19.033 00:18:19.033 ' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.033 --rc genhtml_branch_coverage=1 00:18:19.033 --rc genhtml_function_coverage=1 00:18:19.033 --rc genhtml_legend=1 00:18:19.033 --rc geninfo_all_blocks=1 
00:18:19.033 --rc geninfo_unexecuted_blocks=1 00:18:19.033 00:18:19.033 ' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.033 --rc genhtml_branch_coverage=1 00:18:19.033 --rc genhtml_function_coverage=1 00:18:19.033 --rc genhtml_legend=1 00:18:19.033 --rc geninfo_all_blocks=1 00:18:19.033 --rc geninfo_unexecuted_blocks=1 00:18:19.033 00:18:19.033 ' 00:18:19.033 12:49:18 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.033 --rc genhtml_branch_coverage=1 00:18:19.033 --rc genhtml_function_coverage=1 00:18:19.033 --rc genhtml_legend=1 00:18:19.033 --rc geninfo_all_blocks=1 00:18:19.033 --rc geninfo_unexecuted_blocks=1 00:18:19.033 00:18:19.033 ' 00:18:19.033 12:49:18 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.033 12:49:18 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.033 12:49:18 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.033 12:49:18 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.033 12:49:18 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.033 12:49:18 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:18:19.033 12:49:18 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.033 12:49:18 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:18:19.033 12:49:18 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:19.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.546 Waiting for block devices as requested 00:18:19.546 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.546 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.546 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:19.803 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:25.059 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:25.059 12:49:24 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:18:25.059 12:49:24 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:18:25.059 12:49:24 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:18:25.318 12:49:25 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:18:25.318 12:49:25 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:18:25.318 No valid GPT data, bailing 00:18:25.318 12:49:25 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:25.318 12:49:25 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:18:25.318 12:49:25 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:25.318 12:49:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:25.318 12:49:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.318 12:49:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.318 12:49:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:25.318 ************************************ 00:18:25.318 START TEST xnvme_rpc 00:18:25.318 ************************************ 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80992 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80992 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80992 ']' 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.318 12:49:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.575 [2024-12-05 12:49:25.201608] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
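Once spdk_tgt is listening, the xnvme_rpc exchange traced below can also be driven by hand with the SPDK rpc.py client (run from the SPDK checkout; rpc.py defaults to /var/tmp/spdk.sock, matching DEFAULT_RPC_ADDR above). A sketch of the same create/inspect/delete round trip:

# wrap the raw namespace in an xnvme bdev over libaio, as rpc_cmd does below
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
# read the parameters back out of the live config, same jq filter as the test
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
# tear the bdev down again
scripts/rpc.py bdev_xnvme_delete xnvme_bdev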
00:18:25.575 [2024-12-05 12:49:25.201761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80992 ] 00:18:25.575 [2024-12-05 12:49:25.362885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.575 [2024-12-05 12:49:25.400903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 xnvme_bdev 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80992 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80992 ']' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80992 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80992 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.507 killing process with pid 80992 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80992' 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80992 00:18:26.507 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80992 00:18:26.765 00:18:26.765 real 0m1.477s 00:18:26.765 user 0m1.566s 00:18:26.765 sys 0m0.396s 00:18:26.765 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.765 ************************************ 00:18:26.765 END TEST xnvme_rpc 00:18:26.765 ************************************ 00:18:26.765 12:49:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:27.024 12:49:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:27.024 12:49:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:27.024 12:49:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.024 12:49:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:27.024 ************************************ 00:18:27.024 START TEST xnvme_bdevperf 00:18:27.024 ************************************ 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:27.024 12:49:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:27.024 { 00:18:27.024 "subsystems": [ 00:18:27.024 { 00:18:27.024 "subsystem": "bdev", 00:18:27.024 "config": [ 00:18:27.024 { 00:18:27.024 "params": { 00:18:27.024 "io_mechanism": "libaio", 00:18:27.024 "conserve_cpu": false, 00:18:27.024 "filename": "/dev/nvme0n1", 00:18:27.024 "name": "xnvme_bdev" 00:18:27.024 }, 00:18:27.024 "method": "bdev_xnvme_create" 00:18:27.024 }, 00:18:27.024 { 00:18:27.024 "method": "bdev_wait_for_examine" 00:18:27.024 } 00:18:27.024 ] 00:18:27.024 } 00:18:27.024 ] 00:18:27.024 } 00:18:27.024 [2024-12-05 12:49:26.733541] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:18:27.024 [2024-12-05 12:49:26.733768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81050 ] 00:18:27.306 [2024-12-05 12:49:26.908296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.306 [2024-12-05 12:49:26.934731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.306 Running I/O for 5 seconds... 00:18:29.215 35423.00 IOPS, 138.37 MiB/s [2024-12-05T12:49:30.443Z] 35588.00 IOPS, 139.02 MiB/s [2024-12-05T12:49:31.394Z] 36050.00 IOPS, 140.82 MiB/s [2024-12-05T12:49:32.327Z] 36507.25 IOPS, 142.61 MiB/s 00:18:32.476 Latency(us) 00:18:32.476 [2024-12-05T12:49:32.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.476 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:32.476 xnvme_bdev : 5.00 35893.73 140.21 0.00 0.00 1778.53 165.42 9376.69 00:18:32.476 [2024-12-05T12:49:32.328Z] =================================================================================================================== 00:18:32.476 [2024-12-05T12:49:32.328Z] Total : 35893.73 140.21 0.00 0.00 1778.53 165.42 9376.69 00:18:32.476 12:49:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:32.476 12:49:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:32.476 12:49:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:32.476 12:49:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:32.476 12:49:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 { 00:18:32.476 "subsystems": [ 00:18:32.476 { 00:18:32.476 "subsystem": "bdev", 00:18:32.476 "config": [ 00:18:32.476 { 00:18:32.476 "params": { 00:18:32.476 "io_mechanism": "libaio", 00:18:32.476 "conserve_cpu": false, 00:18:32.476 "filename": "/dev/nvme0n1", 00:18:32.476 "name": "xnvme_bdev" 00:18:32.476 }, 00:18:32.476 "method": "bdev_xnvme_create" 00:18:32.476 }, 00:18:32.476 { 00:18:32.476 "method": "bdev_wait_for_examine" 00:18:32.476 } 00:18:32.476 ] 00:18:32.476 } 00:18:32.476 ] 00:18:32.476 } 00:18:32.476 [2024-12-05 12:49:32.314145] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
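The bdevperf command line and the JSON it reads from /dev/fd/62 both appear expanded in the trace; a hand-run equivalent, substituting a bash process substitution for the harness's gen_conf plumbing (a sketch, not the exact harness wiring):

    build/examples/bdevperf -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 \
        --json <(echo '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio","conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},{"method":"bdev_wait_for_examine"}]}]}')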
00:18:32.476 [2024-12-05 12:49:32.314276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81117 ] 00:18:32.734 [2024-12-05 12:49:32.475245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.734 [2024-12-05 12:49:32.499968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.993 Running I/O for 5 seconds... 00:18:34.974 37025.00 IOPS, 144.63 MiB/s [2024-12-05T12:49:35.756Z] 36784.00 IOPS, 143.69 MiB/s [2024-12-05T12:49:36.687Z] 36861.00 IOPS, 143.99 MiB/s [2024-12-05T12:49:37.626Z] 36874.75 IOPS, 144.04 MiB/s [2024-12-05T12:49:37.626Z] 36966.40 IOPS, 144.40 MiB/s 00:18:37.774 Latency(us) 00:18:37.774 [2024-12-05T12:49:37.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.774 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:37.774 xnvme_bdev : 5.01 36903.99 144.16 0.00 0.00 1729.81 378.09 8872.57 00:18:37.774 [2024-12-05T12:49:37.626Z] =================================================================================================================== 00:18:37.774 [2024-12-05T12:49:37.626Z] Total : 36903.99 144.16 0.00 0.00 1729.81 378.09 8872.57 00:18:38.032 00:18:38.032 real 0m11.164s 00:18:38.032 user 0m3.058s 00:18:38.032 sys 0m5.545s 00:18:38.032 12:49:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.032 ************************************ 00:18:38.032 END TEST xnvme_bdevperf 00:18:38.033 ************************************ 00:18:38.033 12:49:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:38.033 12:49:37 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:38.033 12:49:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:38.033 12:49:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.033 12:49:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.033 ************************************ 00:18:38.033 START TEST xnvme_fio_plugin 00:18:38.033 ************************************ 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.033 12:49:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:38.291 { 00:18:38.291 "subsystems": [ 00:18:38.291 { 00:18:38.291 "subsystem": "bdev", 00:18:38.291 "config": [ 00:18:38.291 { 00:18:38.291 "params": { 00:18:38.291 "io_mechanism": "libaio", 00:18:38.291 "conserve_cpu": false, 00:18:38.291 "filename": "/dev/nvme0n1", 00:18:38.291 "name": "xnvme_bdev" 00:18:38.291 }, 00:18:38.291 "method": "bdev_xnvme_create" 00:18:38.291 }, 00:18:38.291 { 00:18:38.291 "method": "bdev_wait_for_examine" 00:18:38.291 } 00:18:38.291 ] 00:18:38.291 } 00:18:38.291 ] 00:18:38.291 } 00:18:38.291 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:38.291 fio-3.35 00:18:38.291 Starting 1 thread 00:18:44.854 00:18:44.854 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81227: Thu Dec 5 12:49:43 2024 00:18:44.854 read: IOPS=40.6k, BW=158MiB/s (166MB/s)(792MiB/5001msec) 00:18:44.854 slat (usec): min=3, max=2193, avg=20.24, stdev=53.66 00:18:44.854 clat (usec): min=79, max=5007, avg=986.08, stdev=565.99 00:18:44.854 lat (usec): min=114, max=5078, avg=1006.32, stdev=566.08 00:18:44.854 clat percentiles (usec): 00:18:44.854 | 1.00th=[ 182], 5.00th=[ 269], 10.00th=[ 351], 20.00th=[ 502], 00:18:44.854 | 30.00th=[ 652], 40.00th=[ 775], 50.00th=[ 898], 60.00th=[ 1020], 00:18:44.854 | 70.00th=[ 1172], 80.00th=[ 1385], 90.00th=[ 1713], 95.00th=[ 2057], 00:18:44.854 | 99.00th=[ 2835], 99.50th=[ 3195], 99.90th=[ 3851], 99.95th=[ 4113], 00:18:44.854 | 99.99th=[ 4621] 00:18:44.854 bw ( KiB/s): min=142248, 
max=192376, per=99.58%, avg=161526.22, stdev=13966.40, samples=9 00:18:44.854 iops : min=35562, max=48094, avg=40381.56, stdev=3491.60, samples=9 00:18:44.854 lat (usec) : 100=0.01%, 250=3.99%, 500=15.96%, 750=17.90%, 1000=20.62% 00:18:44.854 lat (msec) : 2=35.96%, 4=5.48%, 10=0.07% 00:18:44.854 cpu : usr=32.28%, sys=52.98%, ctx=34, majf=0, minf=773 00:18:44.854 IO depths : 1=0.2%, 2=1.2%, 4=3.9%, 8=10.0%, 16=24.8%, 32=58.0%, >=64=1.9% 00:18:44.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.854 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:18:44.854 issued rwts: total=202802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.854 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:44.854 00:18:44.854 Run status group 0 (all jobs): 00:18:44.854 READ: bw=158MiB/s (166MB/s), 158MiB/s-158MiB/s (166MB/s-166MB/s), io=792MiB (831MB), run=5001-5001msec 00:18:44.854 ----------------------------------------------------- 00:18:44.854 Suppressions used: 00:18:44.854 count bytes template 00:18:44.854 1 11 /usr/src/fio/parse.c 00:18:44.854 1 8 libtcmalloc_minimal.so 00:18:44.854 1 904 libcrypto.so 00:18:44.854 ----------------------------------------------------- 00:18:44.854 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
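The fio invocation being assembled here is the same one the trace shows fully expanded; stripped of the harness it reduces to roughly the following (the ASAN entry in LD_PRELOAD is only present on sanitizer builds, which is what the ldd probe above detects):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev

where /dev/fd/62 carries the same bdev_xnvme_create JSON printed below.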
00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:44.854 12:49:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:44.854 { 00:18:44.854 "subsystems": [ 00:18:44.854 { 00:18:44.854 "subsystem": "bdev", 00:18:44.854 "config": [ 00:18:44.854 { 00:18:44.854 "params": { 00:18:44.854 "io_mechanism": "libaio", 00:18:44.854 "conserve_cpu": false, 00:18:44.854 "filename": "/dev/nvme0n1", 00:18:44.854 "name": "xnvme_bdev" 00:18:44.854 }, 00:18:44.854 "method": "bdev_xnvme_create" 00:18:44.854 }, 00:18:44.854 { 00:18:44.854 "method": "bdev_wait_for_examine" 00:18:44.854 } 00:18:44.854 ] 00:18:44.854 } 00:18:44.854 ] 00:18:44.854 } 00:18:44.854 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:44.854 fio-3.35 00:18:44.854 Starting 1 thread 00:18:50.140 00:18:50.140 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81313: Thu Dec 5 12:49:49 2024 00:18:50.140 write: IOPS=38.9k, BW=152MiB/s (159MB/s)(759MiB/5001msec); 0 zone resets 00:18:50.140 slat (usec): min=3, max=1683, avg=20.61, stdev=68.94 00:18:50.140 clat (usec): min=82, max=4418, avg=1063.00, stdev=551.94 00:18:50.140 lat (usec): min=153, max=4551, avg=1083.61, stdev=549.30 00:18:50.140 clat percentiles (usec): 00:18:50.140 | 1.00th=[ 194], 5.00th=[ 302], 10.00th=[ 404], 20.00th=[ 586], 00:18:50.140 | 30.00th=[ 742], 40.00th=[ 873], 50.00th=[ 988], 60.00th=[ 1123], 00:18:50.141 | 70.00th=[ 1270], 80.00th=[ 1483], 90.00th=[ 1795], 95.00th=[ 2057], 00:18:50.141 | 99.00th=[ 2769], 99.50th=[ 2999], 99.90th=[ 3556], 99.95th=[ 3720], 00:18:50.141 | 99.99th=[ 4228] 00:18:50.141 bw ( KiB/s): min=129912, max=185576, per=99.82%, avg=155181.33, stdev=15404.77, samples=9 00:18:50.141 iops : min=32480, max=46394, avg=38795.33, stdev=3850.89, samples=9 00:18:50.141 lat (usec) : 100=0.01%, 250=2.90%, 500=12.13%, 750=15.73%, 1000=20.02% 00:18:50.141 lat (msec) : 2=43.38%, 4=5.81%, 10=0.02% 00:18:50.141 cpu : usr=33.80%, sys=54.58%, ctx=28, majf=0, minf=774 00:18:50.141 IO depths : 1=0.3%, 2=1.0%, 4=3.4%, 8=9.5%, 16=24.3%, 32=59.5%, >=64=1.9% 00:18:50.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.141 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:18:50.141 issued rwts: total=0,194367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.141 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:50.141 00:18:50.141 Run status group 0 (all jobs): 00:18:50.141 WRITE: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=759MiB (796MB), run=5001-5001msec 00:18:50.141 ----------------------------------------------------- 00:18:50.141 Suppressions used: 00:18:50.141 count bytes template 00:18:50.141 1 11 /usr/src/fio/parse.c 00:18:50.141 1 8 libtcmalloc_minimal.so 00:18:50.141 1 904 libcrypto.so 00:18:50.141 ----------------------------------------------------- 00:18:50.141 00:18:50.141 00:18:50.141 real 0m12.045s 00:18:50.141 user 0m4.419s 00:18:50.141 sys 
0m5.926s 00:18:50.141 ************************************ 00:18:50.141 12:49:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.141 12:49:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:50.141 END TEST xnvme_fio_plugin 00:18:50.141 ************************************ 00:18:50.141 12:49:49 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:50.141 12:49:49 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:50.141 12:49:49 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:50.141 12:49:49 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:50.141 12:49:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:50.141 12:49:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.141 12:49:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.141 ************************************ 00:18:50.141 START TEST xnvme_rpc 00:18:50.141 ************************************ 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81394 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81394 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81394 ']' 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.141 12:49:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.400 [2024-12-05 12:49:50.081304] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
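This second xnvme_rpc pass re-runs the same checks with conserve_cpu enabled; per the cc["true"]=-c mapping earlier in the trace, the only change on the wire is the trailing flag. Sketched with the same assumed rpc.py wrapper:

    # -c asks xnvme to conserve CPU; the jq probe checks the flag round-trips
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'    # expect: true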
00:18:50.400 [2024-12-05 12:49:50.081514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81394 ] 00:18:50.400 [2024-12-05 12:49:50.248969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.660 [2024-12-05 12:49:50.273977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.230 xnvme_bdev 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:51.230 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:51.231 12:49:50 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:51.231 12:49:51 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81394 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81394 ']' 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81394 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.231 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81394 00:18:51.491 killing process with pid 81394 00:18:51.491 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.491 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.491 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81394' 00:18:51.491 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81394 00:18:51.491 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81394 00:18:51.752 00:18:51.752 real 0m1.446s 00:18:51.752 user 0m1.538s 00:18:51.752 sys 0m0.383s 00:18:51.752 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.752 12:49:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 ************************************ 00:18:51.752 END TEST xnvme_rpc 00:18:51.752 ************************************ 00:18:51.752 12:49:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:51.752 12:49:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:51.752 12:49:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:51.752 12:49:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 ************************************ 00:18:51.752 START TEST xnvme_bdevperf 00:18:51.752 ************************************ 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:51.752 12:49:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:51.752 { 00:18:51.752 "subsystems": [ 00:18:51.752 { 00:18:51.752 "subsystem": "bdev", 00:18:51.752 "config": [ 00:18:51.752 { 00:18:51.752 "params": { 00:18:51.752 "io_mechanism": "libaio", 00:18:51.752 "conserve_cpu": true, 00:18:51.752 "filename": "/dev/nvme0n1", 00:18:51.752 "name": "xnvme_bdev" 00:18:51.752 }, 00:18:51.752 "method": "bdev_xnvme_create" 00:18:51.752 }, 00:18:51.752 { 00:18:51.752 "method": "bdev_wait_for_examine" 00:18:51.752 } 00:18:51.752 ] 00:18:51.752 } 00:18:51.752 ] 00:18:51.752 } 00:18:51.752 [2024-12-05 12:49:51.556718] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:18:51.752 [2024-12-05 12:49:51.556864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81446 ] 00:18:52.014 [2024-12-05 12:49:51.707883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.014 [2024-12-05 12:49:51.732898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.014 Running I/O for 5 seconds... 00:18:54.345 29161.00 IOPS, 113.91 MiB/s [2024-12-05T12:49:55.134Z] 31280.50 IOPS, 122.19 MiB/s [2024-12-05T12:49:56.072Z] 30931.00 IOPS, 120.82 MiB/s [2024-12-05T12:49:57.038Z] 30436.75 IOPS, 118.89 MiB/s 00:18:57.186 Latency(us) 00:18:57.186 [2024-12-05T12:49:57.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.186 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:57.186 xnvme_bdev : 5.00 30002.63 117.20 0.00 0.00 2128.33 211.10 6225.92 00:18:57.186 [2024-12-05T12:49:57.038Z] =================================================================================================================== 00:18:57.186 [2024-12-05T12:49:57.038Z] Total : 30002.63 117.20 0.00 0.00 2128.33 211.10 6225.92 00:18:57.446 12:49:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:57.446 12:49:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:57.446 12:49:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:57.446 12:49:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:57.446 12:49:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:57.446 { 00:18:57.446 "subsystems": [ 00:18:57.446 { 00:18:57.446 "subsystem": "bdev", 00:18:57.446 "config": [ 00:18:57.446 { 00:18:57.446 "params": { 00:18:57.446 "io_mechanism": "libaio", 00:18:57.446 "conserve_cpu": true, 00:18:57.446 "filename": "/dev/nvme0n1", 00:18:57.446 "name": "xnvme_bdev" 00:18:57.446 }, 00:18:57.446 "method": "bdev_xnvme_create" 00:18:57.446 }, 00:18:57.446 { 00:18:57.446 "method": "bdev_wait_for_examine" 00:18:57.446 } 00:18:57.446 ] 00:18:57.447 } 00:18:57.447 ] 00:18:57.447 } 00:18:57.447 [2024-12-05 12:49:57.100602] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:18:57.447 [2024-12-05 12:49:57.100869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81516 ] 00:18:57.447 [2024-12-05 12:49:57.254377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.447 [2024-12-05 12:49:57.279713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.707 Running I/O for 5 seconds... 00:18:59.595 31863.00 IOPS, 124.46 MiB/s [2024-12-05T12:50:00.833Z] 31359.00 IOPS, 122.50 MiB/s [2024-12-05T12:50:01.405Z] 31525.00 IOPS, 123.14 MiB/s [2024-12-05T12:50:02.791Z] 29297.75 IOPS, 114.44 MiB/s [2024-12-05T12:50:02.791Z] 24742.20 IOPS, 96.65 MiB/s 00:19:02.939 Latency(us) 00:19:02.939 [2024-12-05T12:50:02.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.939 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:02.939 xnvme_bdev : 5.01 24685.55 96.43 0.00 0.00 2584.19 41.16 37305.11 00:19:02.939 [2024-12-05T12:50:02.791Z] =================================================================================================================== 00:19:02.939 [2024-12-05T12:50:02.791Z] Total : 24685.55 96.43 0.00 0.00 2584.19 41.16 37305.11 00:19:02.939 ************************************ 00:19:02.939 END TEST xnvme_bdevperf 00:19:02.939 ************************************ 00:19:02.939 00:19:02.939 real 0m11.108s 00:19:02.939 user 0m4.032s 00:19:02.939 sys 0m5.888s 00:19:02.939 12:50:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.939 12:50:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:02.939 12:50:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:02.939 12:50:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:02.939 12:50:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.939 12:50:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.939 ************************************ 00:19:02.939 START TEST xnvme_fio_plugin 00:19:02.939 ************************************ 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:02.939 12:50:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:02.939 12:50:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:02.939 { 00:19:02.939 "subsystems": [ 00:19:02.939 { 00:19:02.939 "subsystem": "bdev", 00:19:02.939 "config": [ 00:19:02.939 { 00:19:02.939 "params": { 00:19:02.939 "io_mechanism": "libaio", 00:19:02.939 "conserve_cpu": true, 00:19:02.939 "filename": "/dev/nvme0n1", 00:19:02.939 "name": "xnvme_bdev" 00:19:02.939 }, 00:19:02.939 "method": "bdev_xnvme_create" 00:19:02.939 }, 00:19:02.939 { 00:19:02.939 "method": "bdev_wait_for_examine" 00:19:02.939 } 00:19:02.939 ] 00:19:02.939 } 00:19:02.939 ] 00:19:02.939 } 00:19:03.200 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:03.200 fio-3.35 00:19:03.200 Starting 1 thread 00:19:08.621 00:19:08.621 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81624: Thu Dec 5 12:50:08 2024 00:19:08.621 read: IOPS=38.4k, BW=150MiB/s (157MB/s)(750MiB/5001msec) 00:19:08.621 slat (usec): min=4, max=1833, avg=20.46, stdev=75.33 00:19:08.621 clat (usec): min=105, max=4837, avg=1099.17, stdev=520.23 00:19:08.621 lat (usec): min=139, max=4887, avg=1119.64, stdev=516.54 00:19:08.621 clat percentiles (usec): 00:19:08.621 | 1.00th=[ 221], 5.00th=[ 343], 10.00th=[ 482], 20.00th=[ 660], 00:19:08.621 | 30.00th=[ 799], 40.00th=[ 922], 50.00th=[ 1045], 60.00th=[ 1172], 00:19:08.621 | 70.00th=[ 1319], 80.00th=[ 1500], 90.00th=[ 1745], 95.00th=[ 1991], 00:19:08.621 | 99.00th=[ 2737], 99.50th=[ 3032], 99.90th=[ 3556], 99.95th=[ 3851], 00:19:08.621 | 99.99th=[ 4293] 00:19:08.621 bw ( KiB/s): min=140184, max=166536, 
per=99.39%, avg=152735.11, stdev=7274.69, samples=9 00:19:08.621 iops : min=35046, max=41634, avg=38183.78, stdev=1818.67, samples=9 00:19:08.621 lat (usec) : 250=1.81%, 500=8.90%, 750=15.70%, 1000=19.79% 00:19:08.621 lat (msec) : 2=48.88%, 4=4.88%, 10=0.03% 00:19:08.621 cpu : usr=36.72%, sys=53.98%, ctx=14, majf=0, minf=773 00:19:08.621 IO depths : 1=0.3%, 2=1.0%, 4=3.2%, 8=9.4%, 16=24.4%, 32=59.8%, >=64=2.0% 00:19:08.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.621 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:19:08.621 issued rwts: total=192121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.621 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:08.621 00:19:08.621 Run status group 0 (all jobs): 00:19:08.621 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=750MiB (787MB), run=5001-5001msec 00:19:08.901 ----------------------------------------------------- 00:19:08.901 Suppressions used: 00:19:08.901 count bytes template 00:19:08.901 1 11 /usr/src/fio/parse.c 00:19:08.901 1 8 libtcmalloc_minimal.so 00:19:08.901 1 904 libcrypto.so 00:19:08.901 ----------------------------------------------------- 00:19:08.901 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:08.901 12:50:08 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:08.901 12:50:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:08.901 { 00:19:08.901 "subsystems": [ 00:19:08.901 { 00:19:08.901 "subsystem": "bdev", 00:19:08.901 "config": [ 00:19:08.901 { 00:19:08.901 "params": { 00:19:08.901 "io_mechanism": "libaio", 00:19:08.901 "conserve_cpu": true, 00:19:08.901 "filename": "/dev/nvme0n1", 00:19:08.901 "name": "xnvme_bdev" 00:19:08.901 }, 00:19:08.901 "method": "bdev_xnvme_create" 00:19:08.901 }, 00:19:08.901 { 00:19:08.901 "method": "bdev_wait_for_examine" 00:19:08.901 } 00:19:08.901 ] 00:19:08.901 } 00:19:08.901 ] 00:19:08.901 } 00:19:09.160 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:09.160 fio-3.35 00:19:09.160 Starting 1 thread 00:19:14.505 00:19:14.505 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81709: Thu Dec 5 12:50:14 2024 00:19:14.505 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(748MiB/5001msec); 0 zone resets 00:19:14.505 slat (usec): min=4, max=1760, avg=20.95, stdev=74.10 00:19:14.505 clat (usec): min=13, max=8781, avg=1094.07, stdev=535.08 00:19:14.505 lat (usec): min=80, max=8786, avg=1115.01, stdev=531.72 00:19:14.505 clat percentiles (usec): 00:19:14.505 | 1.00th=[ 219], 5.00th=[ 343], 10.00th=[ 474], 20.00th=[ 644], 00:19:14.505 | 30.00th=[ 791], 40.00th=[ 914], 50.00th=[ 1037], 60.00th=[ 1156], 00:19:14.505 | 70.00th=[ 1303], 80.00th=[ 1483], 90.00th=[ 1762], 95.00th=[ 2057], 00:19:14.505 | 99.00th=[ 2704], 99.50th=[ 2966], 99.90th=[ 3589], 99.95th=[ 3949], 00:19:14.505 | 99.99th=[ 7242] 00:19:14.505 bw ( KiB/s): min=144216, max=162904, per=100.00%, avg=153988.44, stdev=5900.31, samples=9 00:19:14.505 iops : min=36054, max=40726, avg=38497.11, stdev=1475.08, samples=9 00:19:14.505 lat (usec) : 20=0.01%, 50=0.01%, 100=0.01%, 250=1.81%, 500=9.41% 00:19:14.505 lat (usec) : 750=15.99%, 1000=20.07% 00:19:14.505 lat (msec) : 2=46.95%, 4=5.71%, 10=0.05% 00:19:14.505 cpu : usr=35.22%, sys=55.22%, ctx=6, majf=0, minf=774 00:19:14.505 IO depths : 1=0.3%, 2=0.9%, 4=3.1%, 8=9.4%, 16=24.8%, 32=59.6%, >=64=2.0% 00:19:14.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.505 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:19:14.505 issued rwts: total=0,191577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:14.505 00:19:14.505 Run status group 0 (all jobs): 00:19:14.505 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=748MiB (785MB), run=5001-5001msec 00:19:15.089 ----------------------------------------------------- 00:19:15.089 Suppressions used: 00:19:15.089 count bytes template 00:19:15.089 1 11 /usr/src/fio/parse.c 00:19:15.089 1 8 libtcmalloc_minimal.so 00:19:15.089 1 904 libcrypto.so 00:19:15.089 ----------------------------------------------------- 00:19:15.089 00:19:15.089 ************************************ 00:19:15.089 END 
TEST xnvme_fio_plugin 00:19:15.089 ************************************ 00:19:15.089 00:19:15.089 real 0m12.062s 00:19:15.089 user 0m4.747s 00:19:15.089 sys 0m5.975s 00:19:15.089 12:50:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.089 12:50:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:15.089 12:50:14 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:15.089 12:50:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:15.089 12:50:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.089 12:50:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.089 ************************************ 00:19:15.089 START TEST xnvme_rpc 00:19:15.089 ************************************ 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81785 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81785 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81785 ']' 00:19:15.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:15.089 12:50:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.089 [2024-12-05 12:50:14.868594] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
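From here the outer loop switches io_mechanism to io_uring. Per the xnvme_filename map at the top of this section, io_uring keeps the /dev/nvme0n1 block device (only io_uring_cmd would move to the /dev/ng0n1 char device), so the create call changes in just one argument. Again assuming the rpc.py wrapper:

    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'    # expect: io_uring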
00:19:15.089 [2024-12-05 12:50:14.868974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81785 ] 00:19:15.361 [2024-12-05 12:50:15.029311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.361 [2024-12-05 12:50:15.054123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.938 xnvme_bdev 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:15.938 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:16.196 12:50:15 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81785 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81785 ']' 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81785 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81785 00:19:16.196 killing process with pid 81785 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81785' 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81785 00:19:16.196 12:50:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81785 00:19:16.457 ************************************ 00:19:16.457 END TEST xnvme_rpc 00:19:16.457 ************************************ 00:19:16.457 00:19:16.457 real 0m1.446s 00:19:16.457 user 0m1.554s 00:19:16.457 sys 0m0.380s 00:19:16.457 12:50:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.457 12:50:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.457 12:50:16 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:16.457 12:50:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.457 12:50:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.457 12:50:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.457 ************************************ 00:19:16.457 START TEST xnvme_bdevperf 00:19:16.457 ************************************ 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:16.457 12:50:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:16.717 { 00:19:16.717 "subsystems": [ 00:19:16.717 { 00:19:16.717 "subsystem": "bdev", 00:19:16.717 "config": [ 00:19:16.717 { 00:19:16.717 "params": { 00:19:16.717 "io_mechanism": "io_uring", 00:19:16.717 "conserve_cpu": false, 00:19:16.717 "filename": "/dev/nvme0n1", 00:19:16.717 "name": "xnvme_bdev" 00:19:16.717 }, 00:19:16.717 "method": "bdev_xnvme_create" 00:19:16.717 }, 00:19:16.717 { 00:19:16.717 "method": "bdev_wait_for_examine" 00:19:16.717 } 00:19:16.717 ] 00:19:16.717 } 00:19:16.717 ] 00:19:16.717 } 00:19:16.717 [2024-12-05 12:50:16.367150] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:19:16.717 [2024-12-05 12:50:16.367433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81837 ] 00:19:16.717 [2024-12-05 12:50:16.530091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.717 [2024-12-05 12:50:16.556278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.977 Running I/O for 5 seconds... 00:19:18.855 32754.00 IOPS, 127.95 MiB/s [2024-12-05T12:50:20.091Z] 34195.00 IOPS, 133.57 MiB/s [2024-12-05T12:50:20.660Z] 34525.33 IOPS, 134.86 MiB/s [2024-12-05T12:50:22.056Z] 34784.50 IOPS, 135.88 MiB/s [2024-12-05T12:50:22.056Z] 34657.00 IOPS, 135.38 MiB/s 00:19:22.204 Latency(us) 00:19:22.204 [2024-12-05T12:50:22.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.204 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:22.204 xnvme_bdev : 5.01 34632.17 135.28 0.00 0.00 1842.95 201.65 31658.93 00:19:22.204 [2024-12-05T12:50:22.056Z] =================================================================================================================== 00:19:22.204 [2024-12-05T12:50:22.056Z] Total : 34632.17 135.28 0.00 0.00 1842.95 201.65 31658.93 00:19:22.204 12:50:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:22.204 12:50:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:22.204 12:50:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:22.204 12:50:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:22.204 12:50:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:22.204 { 00:19:22.204 "subsystems": [ 00:19:22.204 { 00:19:22.204 "subsystem": "bdev", 00:19:22.204 "config": [ 00:19:22.204 { 00:19:22.204 "params": { 00:19:22.204 "io_mechanism": "io_uring", 00:19:22.204 "conserve_cpu": false, 00:19:22.204 "filename": "/dev/nvme0n1", 00:19:22.204 "name": "xnvme_bdev" 00:19:22.204 }, 00:19:22.204 "method": "bdev_xnvme_create" 00:19:22.204 }, 00:19:22.204 { 00:19:22.204 "method": "bdev_wait_for_examine" 00:19:22.204 } 00:19:22.204 ] 00:19:22.204 } 00:19:22.204 ] 00:19:22.204 } 00:19:22.204 [2024-12-05 12:50:21.899058] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
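The randread pass above and the randwrite pass now starting share one bdevperf invocation; gen_conf emits the "subsystems" JSON shown and feeds it to bdevperf over /dev/fd/62. A minimal by-hand sketch of the same run, assuming the JSON printed above has been saved to ./xnvme.json and that /dev/nvme0n1 is not claimed by another process:

    # Sketch: replay the randread pass outside the test harness.
    # -q queue depth, -w workload, -t seconds, -T target bdev, -o IO size (bytes).
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" --json ./xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096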
00:19:22.204 [2024-12-05 12:50:21.899189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81907 ] 00:19:22.465 [2024-12-05 12:50:22.059665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.465 [2024-12-05 12:50:22.087030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.466 Running I/O for 5 seconds... 00:19:24.351 8340.00 IOPS, 32.58 MiB/s [2024-12-05T12:50:25.630Z] 8713.00 IOPS, 34.04 MiB/s [2024-12-05T12:50:26.223Z] 9227.33 IOPS, 36.04 MiB/s [2024-12-05T12:50:27.222Z] 9484.75 IOPS, 37.05 MiB/s [2024-12-05T12:50:27.222Z] 9734.20 IOPS, 38.02 MiB/s 00:19:27.370 Latency(us) 00:19:27.370 [2024-12-05T12:50:27.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.370 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:27.370 xnvme_bdev : 5.01 9734.11 38.02 0.00 0.00 6566.57 56.71 60494.77 00:19:27.370 [2024-12-05T12:50:27.222Z] =================================================================================================================== 00:19:27.370 [2024-12-05T12:50:27.222Z] Total : 9734.11 38.02 0.00 0.00 6566.57 56.71 60494.77 00:19:27.632 00:19:27.632 real 0m11.066s 00:19:27.632 user 0m4.281s 00:19:27.632 sys 0m6.543s 00:19:27.632 ************************************ 00:19:27.632 END TEST xnvme_bdevperf 00:19:27.632 ************************************ 00:19:27.632 12:50:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.632 12:50:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:27.632 12:50:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:27.632 12:50:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:27.632 12:50:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.632 12:50:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:27.632 ************************************ 00:19:27.632 START TEST xnvme_fio_plugin 00:19:27.632 ************************************ 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:27.632 12:50:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:27.632 { 00:19:27.632 "subsystems": [ 00:19:27.632 { 00:19:27.632 "subsystem": "bdev", 00:19:27.632 "config": [ 00:19:27.632 { 00:19:27.632 "params": { 00:19:27.632 "io_mechanism": "io_uring", 00:19:27.632 "conserve_cpu": false, 00:19:27.632 "filename": "/dev/nvme0n1", 00:19:27.632 "name": "xnvme_bdev" 00:19:27.632 }, 00:19:27.632 "method": "bdev_xnvme_create" 00:19:27.632 }, 00:19:27.632 { 00:19:27.632 "method": "bdev_wait_for_examine" 00:19:27.632 } 00:19:27.632 ] 00:19:27.632 } 00:19:27.632 ] 00:19:27.632 } 00:19:27.893 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:27.893 fio-3.35 00:19:27.893 Starting 1 thread 00:19:33.181 00:19:33.181 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82015: Thu Dec 5 12:50:33 2024 00:19:33.181 read: IOPS=37.2k, BW=145MiB/s (152MB/s)(726MiB/5001msec) 00:19:33.181 slat (usec): min=2, max=150, avg= 4.16, stdev= 2.58 00:19:33.181 clat (usec): min=731, max=3497, avg=1552.95, stdev=319.65 00:19:33.181 lat (usec): min=735, max=3532, avg=1557.11, stdev=320.32 00:19:33.181 clat percentiles (usec): 00:19:33.181 | 1.00th=[ 930], 5.00th=[ 1057], 10.00th=[ 1156], 20.00th=[ 1287], 00:19:33.181 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1532], 60.00th=[ 1614], 00:19:33.181 | 70.00th=[ 1696], 80.00th=[ 1811], 90.00th=[ 1975], 95.00th=[ 2114], 00:19:33.181 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2868], 99.95th=[ 3032], 00:19:33.181 | 99.99th=[ 3326] 00:19:33.181 bw ( KiB/s): min=136704, max=165888, 
per=100.00%, avg=148707.56, stdev=9792.16, samples=9 00:19:33.181 iops : min=34176, max=41472, avg=37176.89, stdev=2448.04, samples=9 00:19:33.181 lat (usec) : 750=0.01%, 1000=2.70% 00:19:33.181 lat (msec) : 2=88.61%, 4=8.68% 00:19:33.181 cpu : usr=32.56%, sys=66.28%, ctx=11, majf=0, minf=771 00:19:33.181 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:33.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.181 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:33.181 issued rwts: total=185920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.181 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:33.181 00:19:33.181 Run status group 0 (all jobs): 00:19:33.181 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=726MiB (762MB), run=5001-5001msec 00:19:33.750 ----------------------------------------------------- 00:19:33.751 Suppressions used: 00:19:33.751 count bytes template 00:19:33.751 1 11 /usr/src/fio/parse.c 00:19:33.751 1 8 libtcmalloc_minimal.so 00:19:33.751 1 904 libcrypto.so 00:19:33.751 ----------------------------------------------------- 00:19:33.751 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:33.751 12:50:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:33.751 { 00:19:33.751 "subsystems": [ 00:19:33.751 { 00:19:33.751 "subsystem": "bdev", 00:19:33.751 "config": [ 00:19:33.751 { 00:19:33.751 "params": { 00:19:33.751 "io_mechanism": "io_uring", 00:19:33.751 "conserve_cpu": false, 00:19:33.751 "filename": "/dev/nvme0n1", 00:19:33.751 "name": "xnvme_bdev" 00:19:33.751 }, 00:19:33.751 "method": "bdev_xnvme_create" 00:19:33.751 }, 00:19:33.751 { 00:19:33.751 "method": "bdev_wait_for_examine" 00:19:33.751 } 00:19:33.751 ] 00:19:33.751 } 00:19:33.751 ] 00:19:33.751 } 00:19:33.751 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:33.751 fio-3.35 00:19:33.751 Starting 1 thread 00:19:40.404 00:19:40.404 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82096: Thu Dec 5 12:50:38 2024 00:19:40.404 write: IOPS=35.5k, BW=139MiB/s (145MB/s)(694MiB/5002msec); 0 zone resets 00:19:40.404 slat (usec): min=2, max=174, avg= 4.47, stdev= 2.65 00:19:40.404 clat (usec): min=82, max=30092, avg=1626.85, stdev=832.04 00:19:40.404 lat (usec): min=85, max=30103, avg=1631.31, stdev=832.26 00:19:40.404 clat percentiles (usec): 00:19:40.404 | 1.00th=[ 906], 5.00th=[ 1037], 10.00th=[ 1123], 20.00th=[ 1254], 00:19:40.404 | 30.00th=[ 1385], 40.00th=[ 1483], 50.00th=[ 1598], 60.00th=[ 1696], 00:19:40.404 | 70.00th=[ 1795], 80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2212], 00:19:40.404 | 99.00th=[ 2573], 99.50th=[ 2769], 99.90th=[14746], 99.95th=[25035], 00:19:40.404 | 99.99th=[27919] 00:19:40.404 bw ( KiB/s): min=122192, max=165352, per=100.00%, avg=142705.78, stdev=15445.39, samples=9 00:19:40.404 iops : min=30548, max=41338, avg=35676.44, stdev=3861.35, samples=9 00:19:40.404 lat (usec) : 100=0.01%, 250=0.01%, 500=0.02%, 750=0.06%, 1000=3.40% 00:19:40.404 lat (msec) : 2=82.55%, 4=13.80%, 10=0.03%, 20=0.06%, 50=0.08% 00:19:40.404 cpu : usr=34.01%, sys=64.75%, ctx=16, majf=0, minf=772 00:19:40.404 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:19:40.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:40.404 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:40.404 issued rwts: total=0,177568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:40.404 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:40.404 00:19:40.404 Run status group 0 (all jobs): 00:19:40.405 WRITE: bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=694MiB (727MB), run=5002-5002msec 00:19:40.405 ----------------------------------------------------- 00:19:40.405 Suppressions used: 00:19:40.405 count bytes template 00:19:40.405 1 11 /usr/src/fio/parse.c 00:19:40.405 1 8 libtcmalloc_minimal.so 00:19:40.405 1 904 libcrypto.so 00:19:40.405 ----------------------------------------------------- 00:19:40.405 00:19:40.405 00:19:40.405 real 0m11.955s 00:19:40.405 user 0m4.418s 00:19:40.405 sys 0m7.094s 00:19:40.405 
12:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.405 12:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:40.405 ************************************ 00:19:40.405 END TEST xnvme_fio_plugin 00:19:40.405 ************************************ 00:19:40.405 12:50:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:40.405 12:50:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:40.405 12:50:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:40.405 12:50:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:40.405 12:50:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:40.405 12:50:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.405 12:50:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:40.405 ************************************ 00:19:40.405 START TEST xnvme_rpc 00:19:40.405 ************************************ 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:40.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82171 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82171 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82171 ']' 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.405 12:50:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.405 [2024-12-05 12:50:39.530355] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
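This xnvme_rpc pass repeats the create/inspect/delete cycle with conserve_cpu enabled: per the cc mapping above, the suite appends -c to bdev_xnvme_create. A by-hand equivalent against the spdk_tgt just launched, assuming rpc_cmd is the usual thin wrapper over scripts/rpc.py:

    # Sketch: the conserve_cpu=true RPC cycle, run manually.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c sets conserve_cpu=true
    "$RPC" framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    "$RPC" bdev_xnvme_delete xnvme_bdev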
00:19:40.405 [2024-12-05 12:50:39.530519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82171 ] 00:19:40.405 [2024-12-05 12:50:39.688142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.405 [2024-12-05 12:50:39.713319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.664 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.665 xnvme_bdev 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.665 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82171 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82171 ']' 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82171 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82171 00:19:40.926 killing process with pid 82171 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82171' 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82171 00:19:40.926 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82171 00:19:41.185 00:19:41.185 real 0m1.455s 00:19:41.185 user 0m1.559s 00:19:41.185 sys 0m0.385s 00:19:41.185 ************************************ 00:19:41.185 END TEST xnvme_rpc 00:19:41.185 ************************************ 00:19:41.185 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.185 12:50:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.185 12:50:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:41.185 12:50:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:41.185 12:50:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.185 12:50:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:41.185 ************************************ 00:19:41.185 START TEST xnvme_bdevperf 00:19:41.185 ************************************ 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:41.185 12:50:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:41.185 { 00:19:41.185 "subsystems": [ 00:19:41.185 { 00:19:41.185 "subsystem": "bdev", 00:19:41.185 "config": [ 00:19:41.185 { 00:19:41.185 "params": { 00:19:41.185 "io_mechanism": "io_uring", 00:19:41.185 "conserve_cpu": true, 00:19:41.185 "filename": "/dev/nvme0n1", 00:19:41.185 "name": "xnvme_bdev" 00:19:41.185 }, 00:19:41.185 "method": "bdev_xnvme_create" 00:19:41.185 }, 00:19:41.185 { 00:19:41.185 "method": "bdev_wait_for_examine" 00:19:41.185 } 00:19:41.185 ] 00:19:41.185 } 00:19:41.185 ] 00:19:41.185 } 00:19:41.444 [2024-12-05 12:50:41.038562] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:19:41.444 [2024-12-05 12:50:41.038703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82234 ] 00:19:41.444 [2024-12-05 12:50:41.198842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.444 [2024-12-05 12:50:41.225003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.703 Running I/O for 5 seconds... 00:19:43.580 26306.00 IOPS, 102.76 MiB/s [2024-12-05T12:50:44.370Z] 30505.00 IOPS, 119.16 MiB/s [2024-12-05T12:50:45.745Z] 32195.33 IOPS, 125.76 MiB/s [2024-12-05T12:50:46.683Z] 31891.50 IOPS, 124.58 MiB/s 00:19:46.831 Latency(us) 00:19:46.831 [2024-12-05T12:50:46.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.831 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:46.831 xnvme_bdev : 5.00 33057.50 129.13 0.00 0.00 1928.90 519.88 88725.66 00:19:46.831 [2024-12-05T12:50:46.683Z] =================================================================================================================== 00:19:46.831 [2024-12-05T12:50:46.683Z] Total : 33057.50 129.13 0.00 0.00 1928.90 519.88 88725.66 00:19:46.831 12:50:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:46.831 12:50:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:46.831 12:50:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:46.831 12:50:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:46.831 12:50:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:46.831 { 00:19:46.831 "subsystems": [ 00:19:46.831 { 00:19:46.831 "subsystem": "bdev", 00:19:46.831 "config": [ 00:19:46.831 { 00:19:46.831 "params": { 00:19:46.831 "io_mechanism": "io_uring", 00:19:46.831 "conserve_cpu": true, 00:19:46.831 "filename": "/dev/nvme0n1", 00:19:46.831 "name": "xnvme_bdev" 00:19:46.831 }, 00:19:46.831 "method": "bdev_xnvme_create" 00:19:46.831 }, 00:19:46.831 { 00:19:46.831 "method": "bdev_wait_for_examine" 00:19:46.831 } 00:19:46.831 ] 00:19:46.831 } 00:19:46.831 ] 00:19:46.831 } 00:19:46.831 [2024-12-05 12:50:46.566885] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
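A quick consistency check on the randread table above: with a closed queue of 64 outstanding IOs, Little's law says the mean latency should be close to queue depth divided by IOPS, and it is:

    # Little's law: avg latency ~= qdepth / IOPS (numbers from the table above).
    awk 'BEGIN { printf "%.1f us\n", 64 / 33057.50 * 1e6 }'   # 1936.0 us vs 1928.90 us reported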
00:19:46.831 [2024-12-05 12:50:46.567010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82298 ] 00:19:47.091 [2024-12-05 12:50:46.727129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.091 [2024-12-05 12:50:46.751687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.091 Running I/O for 5 seconds... 00:19:49.431 6948.00 IOPS, 27.14 MiB/s [2024-12-05T12:50:49.854Z] 8019.00 IOPS, 31.32 MiB/s [2024-12-05T12:50:51.272Z] 10242.00 IOPS, 40.01 MiB/s [2024-12-05T12:50:51.858Z] 10871.50 IOPS, 42.47 MiB/s [2024-12-05T12:50:51.858Z] 11698.00 IOPS, 45.70 MiB/s 00:19:52.006 Latency(us) 00:19:52.006 [2024-12-05T12:50:51.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.006 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:52.006 xnvme_bdev : 5.01 11697.79 45.69 0.00 0.00 5463.47 50.22 774333.05 00:19:52.006 [2024-12-05T12:50:51.858Z] =================================================================================================================== 00:19:52.006 [2024-12-05T12:50:51.858Z] Total : 11697.79 45.69 0.00 0.00 5463.47 50.22 774333.05 00:19:52.266 00:19:52.266 real 0m11.054s 00:19:52.266 user 0m8.260s 00:19:52.266 sys 0m2.093s 00:19:52.266 ************************************ 00:19:52.266 END TEST xnvme_bdevperf 00:19:52.266 ************************************ 00:19:52.266 12:50:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.266 12:50:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:52.266 12:50:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:52.266 12:50:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:52.266 12:50:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.267 12:50:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:52.267 ************************************ 00:19:52.267 START TEST xnvme_fio_plugin 00:19:52.267 ************************************ 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:52.267 12:50:52 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:52.267 12:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:52.526 { 00:19:52.526 "subsystems": [ 00:19:52.526 { 00:19:52.526 "subsystem": "bdev", 00:19:52.526 "config": [ 00:19:52.526 { 00:19:52.526 "params": { 00:19:52.526 "io_mechanism": "io_uring", 00:19:52.526 "conserve_cpu": true, 00:19:52.526 "filename": "/dev/nvme0n1", 00:19:52.526 "name": "xnvme_bdev" 00:19:52.526 }, 00:19:52.526 "method": "bdev_xnvme_create" 00:19:52.526 }, 00:19:52.526 { 00:19:52.526 "method": "bdev_wait_for_examine" 00:19:52.526 } 00:19:52.526 ] 00:19:52.526 } 00:19:52.526 ] 00:19:52.526 } 00:19:52.526 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:52.526 fio-3.35 00:19:52.526 Starting 1 thread 00:19:59.102 00:19:59.102 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82401: Thu Dec 5 12:50:57 2024 00:19:59.102 read: IOPS=40.2k, BW=157MiB/s (164MB/s)(784MiB/5001msec) 00:19:59.102 slat (usec): min=2, max=116, avg= 4.12, stdev= 2.52 00:19:59.102 clat (usec): min=772, max=3019, avg=1430.94, stdev=285.05 00:19:59.102 lat (usec): min=775, max=3043, avg=1435.07, stdev=285.83 00:19:59.102 clat percentiles (usec): 00:19:59.102 | 1.00th=[ 922], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1188], 00:19:59.102 | 30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1385], 60.00th=[ 1467], 00:19:59.102 | 70.00th=[ 1549], 80.00th=[ 1647], 90.00th=[ 1795], 95.00th=[ 1942], 00:19:59.102 | 99.00th=[ 2311], 99.50th=[ 2474], 99.90th=[ 2737], 99.95th=[ 2835], 00:19:59.102 | 99.99th=[ 2933] 00:19:59.102 bw ( KiB/s): min=142336, 
max=167856, per=100.00%, avg=160744.78, stdev=8338.89, samples=9 00:19:59.102 iops : min=35584, max=41964, avg=40186.11, stdev=2084.82, samples=9 00:19:59.102 lat (usec) : 1000=3.35% 00:19:59.102 lat (msec) : 2=92.86%, 4=3.79% 00:19:59.102 cpu : usr=60.82%, sys=35.86%, ctx=12, majf=0, minf=771 00:19:59.102 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:59.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.102 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:59.102 issued rwts: total=200800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:59.102 00:19:59.102 Run status group 0 (all jobs): 00:19:59.102 READ: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=784MiB (822MB), run=5001-5001msec 00:19:59.102 ----------------------------------------------------- 00:19:59.102 Suppressions used: 00:19:59.102 count bytes template 00:19:59.102 1 11 /usr/src/fio/parse.c 00:19:59.102 1 8 libtcmalloc_minimal.so 00:19:59.102 1 904 libcrypto.so 00:19:59.102 ----------------------------------------------------- 00:19:59.102 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:59.102 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:59.103 12:50:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:59.103 { 00:19:59.103 "subsystems": [ 00:19:59.103 { 00:19:59.103 "subsystem": "bdev", 00:19:59.103 "config": [ 00:19:59.103 { 00:19:59.103 "params": { 00:19:59.103 "io_mechanism": "io_uring", 00:19:59.103 "conserve_cpu": true, 00:19:59.103 "filename": "/dev/nvme0n1", 00:19:59.103 "name": "xnvme_bdev" 00:19:59.103 }, 00:19:59.103 "method": "bdev_xnvme_create" 00:19:59.103 }, 00:19:59.103 { 00:19:59.103 "method": "bdev_wait_for_examine" 00:19:59.103 } 00:19:59.103 ] 00:19:59.103 } 00:19:59.103 ] 00:19:59.103 } 00:19:59.103 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:59.103 fio-3.35 00:19:59.103 Starting 1 thread 00:20:04.504 00:20:04.504 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82487: Thu Dec 5 12:51:03 2024 00:20:04.504 write: IOPS=39.5k, BW=154MiB/s (162MB/s)(772MiB/5003msec); 0 zone resets 00:20:04.504 slat (usec): min=2, max=144, avg= 4.10, stdev= 2.20 00:20:04.504 clat (usec): min=262, max=10696, avg=1457.37, stdev=347.24 00:20:04.504 lat (usec): min=268, max=10699, avg=1461.47, stdev=347.75 00:20:04.504 clat percentiles (usec): 00:20:04.504 | 1.00th=[ 938], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[ 1221], 00:20:04.504 | 30.00th=[ 1287], 40.00th=[ 1352], 50.00th=[ 1418], 60.00th=[ 1483], 00:20:04.504 | 70.00th=[ 1565], 80.00th=[ 1663], 90.00th=[ 1827], 95.00th=[ 1975], 00:20:04.504 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 5407], 99.95th=[ 7111], 00:20:04.504 | 99.99th=[ 9110] 00:20:04.504 bw ( KiB/s): min=151016, max=166392, per=100.00%, avg=159770.67, stdev=4584.88, samples=9 00:20:04.504 iops : min=37754, max=41598, avg=39942.67, stdev=1146.22, samples=9 00:20:04.504 lat (usec) : 500=0.01%, 750=0.02%, 1000=2.76% 00:20:04.504 lat (msec) : 2=92.74%, 4=4.33%, 10=0.14%, 20=0.01% 00:20:04.504 cpu : usr=62.61%, sys=33.93%, ctx=16, majf=0, minf=772 00:20:04.504 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:20:04.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.504 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:04.504 issued rwts: total=0,197616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:04.504 00:20:04.504 Run status group 0 (all jobs): 00:20:04.504 WRITE: bw=154MiB/s (162MB/s), 154MiB/s-154MiB/s (162MB/s-162MB/s), io=772MiB (809MB), run=5003-5003msec 00:20:04.504 ----------------------------------------------------- 00:20:04.504 Suppressions used: 00:20:04.504 count bytes template 00:20:04.504 1 11 /usr/src/fio/parse.c 00:20:04.504 1 8 libtcmalloc_minimal.so 00:20:04.504 1 904 libcrypto.so 00:20:04.504 ----------------------------------------------------- 00:20:04.504 00:20:04.504 ************************************ 00:20:04.504 END TEST xnvme_fio_plugin 00:20:04.504 ************************************ 
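Both fio passes in this test run the external spdk_bdev ioengine under the suite's LD_PRELOAD arrangement (ASAN first, then the SPDK fio plugin). A by-hand sketch of the same randread invocation, with a regular file standing in for the /dev/fd/62 config feed (./xnvme.json is assumed to hold the "subsystems" JSON printed above):

    # Sketch: the fio spdk_bdev plugin run, outside the harness.
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=./xnvme.json --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev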
00:20:04.504 00:20:04.504 real 0m12.024s 00:20:04.504 user 0m7.343s 00:20:04.504 sys 0m4.011s 00:20:04.504 12:51:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.504 12:51:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:04.504 12:51:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:04.504 12:51:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:04.504 12:51:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.504 12:51:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:04.504 ************************************ 00:20:04.504 START TEST xnvme_rpc 00:20:04.504 ************************************ 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82562 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82562 00:20:04.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82562 ']' 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.504 12:51:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:04.504 [2024-12-05 12:51:04.267178] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
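The suite now switches the io mechanism to io_uring_cmd, which submits NVMe passthrough commands through io_uring and therefore opens the NVMe generic character device /dev/ng0n1 instead of the block device /dev/nvme0n1. A sketch of the equivalent manual steps, under the same rpc.py wrapper assumption as before:

    # Sketch: confirm the char device exists, then create the bdev over passthrough.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    test -c /dev/ng0n1 && echo "nvme generic char device present"
    "$RPC" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd   # no -c: conserve_cpu stays false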
00:20:04.504 [2024-12-05 12:51:04.267495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82562 ] 00:20:04.766 [2024-12-05 12:51:04.424909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.766 [2024-12-05 12:51:04.454381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.339 xnvme_bdev 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:05.339 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:05.601 
12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82562 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82562 ']' 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82562 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82562 00:20:05.601 killing process with pid 82562 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82562' 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82562 00:20:05.601 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82562 00:20:05.863 00:20:05.863 real 0m1.439s 00:20:05.863 user 0m1.518s 00:20:05.863 sys 0m0.405s 00:20:05.863 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.863 ************************************ 00:20:05.863 END TEST xnvme_rpc 00:20:05.863 ************************************ 00:20:05.863 12:51:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.863 12:51:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:05.863 12:51:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:05.863 12:51:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.863 12:51:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:05.863 ************************************ 00:20:05.863 START TEST xnvme_bdevperf 00:20:05.863 ************************************ 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:05.863 12:51:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.122 { 00:20:06.122 "subsystems": [ 00:20:06.122 { 00:20:06.122 "subsystem": "bdev", 00:20:06.122 "config": [ 00:20:06.122 { 00:20:06.122 "params": { 00:20:06.122 "io_mechanism": "io_uring_cmd", 00:20:06.122 "conserve_cpu": false, 00:20:06.122 "filename": "/dev/ng0n1", 00:20:06.122 "name": "xnvme_bdev" 00:20:06.123 }, 00:20:06.123 "method": "bdev_xnvme_create" 00:20:06.123 }, 00:20:06.123 { 00:20:06.123 "method": "bdev_wait_for_examine" 00:20:06.123 } 00:20:06.123 ] 00:20:06.123 } 00:20:06.123 ] 00:20:06.123 } 00:20:06.123 [2024-12-05 12:51:05.764941] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:20:06.123 [2024-12-05 12:51:05.765270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82620 ] 00:20:06.123 [2024-12-05 12:51:05.922445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.123 [2024-12-05 12:51:05.949651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.384 Running I/O for 5 seconds... 00:20:08.267 33322.00 IOPS, 130.16 MiB/s [2024-12-05T12:51:09.084Z] 34274.50 IOPS, 133.88 MiB/s [2024-12-05T12:51:10.468Z] 34599.67 IOPS, 135.15 MiB/s [2024-12-05T12:51:11.412Z] 35054.00 IOPS, 136.93 MiB/s [2024-12-05T12:51:11.412Z] 35496.20 IOPS, 138.66 MiB/s 00:20:11.560 Latency(us) 00:20:11.560 [2024-12-05T12:51:11.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.560 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:11.560 xnvme_bdev : 5.00 35478.73 138.59 0.00 0.00 1798.95 322.95 22584.71 00:20:11.560 [2024-12-05T12:51:11.412Z] =================================================================================================================== 00:20:11.560 [2024-12-05T12:51:11.412Z] Total : 35478.73 138.59 0.00 0.00 1798.95 322.95 22584.71 00:20:11.560 12:51:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:11.560 12:51:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:11.560 12:51:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:11.560 12:51:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:11.560 12:51:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:11.560 { 00:20:11.560 "subsystems": [ 00:20:11.560 { 00:20:11.560 "subsystem": "bdev", 00:20:11.560 "config": [ 00:20:11.560 { 00:20:11.560 "params": { 00:20:11.560 "io_mechanism": "io_uring_cmd", 00:20:11.560 "conserve_cpu": false, 00:20:11.560 "filename": "/dev/ng0n1", 00:20:11.560 "name": "xnvme_bdev" 00:20:11.560 }, 00:20:11.560 "method": "bdev_xnvme_create" 00:20:11.560 }, 00:20:11.560 { 00:20:11.560 "method": "bdev_wait_for_examine" 00:20:11.560 } 00:20:11.560 ] 00:20:11.560 } 00:20:11.560 ] 00:20:11.560 } 00:20:11.560 [2024-12-05 12:51:11.290183] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
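The MiB/s column in the randread table above is derived directly from IOPS at the 4 KiB IO size set by -o 4096, which is easy to verify:

    # MiB/s = IOPS * io_size / 2^20, numbers from the table above.
    awk 'BEGIN { printf "%.2f MiB/s\n", 35478.73 * 4096 / (1024 * 1024) }'   # 138.59, as reported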
00:20:11.560 [2024-12-05 12:51:11.290442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82684 ] 00:20:11.821 [2024-12-05 12:51:11.449841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.821 [2024-12-05 12:51:11.475546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.821 Running I/O for 5 seconds... 00:20:14.146 21558.00 IOPS, 84.21 MiB/s [2024-12-05T12:51:14.576Z] 21553.00 IOPS, 84.19 MiB/s [2024-12-05T12:51:16.008Z] 22014.67 IOPS, 85.99 MiB/s [2024-12-05T12:51:16.578Z] 22174.75 IOPS, 86.62 MiB/s [2024-12-05T12:51:16.840Z] 22753.40 IOPS, 88.88 MiB/s 00:20:16.988 Latency(us) 00:20:16.988 [2024-12-05T12:51:16.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.988 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:16.988 xnvme_bdev : 5.04 22598.59 88.28 0.00 0.00 2826.90 56.71 170998.55 00:20:16.988 [2024-12-05T12:51:16.840Z] =================================================================================================================== 00:20:16.988 [2024-12-05T12:51:16.840Z] Total : 22598.59 88.28 0.00 0.00 2826.90 56.71 170998.55 00:20:16.988 12:51:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:16.988 12:51:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:16.988 12:51:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:16.988 12:51:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:16.988 12:51:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:16.988 { 00:20:16.988 "subsystems": [ 00:20:16.988 { 00:20:16.988 "subsystem": "bdev", 00:20:16.988 "config": [ 00:20:16.988 { 00:20:16.988 "params": { 00:20:16.988 "io_mechanism": "io_uring_cmd", 00:20:16.988 "conserve_cpu": false, 00:20:16.988 "filename": "/dev/ng0n1", 00:20:16.988 "name": "xnvme_bdev" 00:20:16.988 }, 00:20:16.988 "method": "bdev_xnvme_create" 00:20:16.988 }, 00:20:16.988 { 00:20:16.988 "method": "bdev_wait_for_examine" 00:20:16.988 } 00:20:16.988 ] 00:20:16.988 } 00:20:16.988 ] 00:20:16.988 } 00:20:17.249 [2024-12-05 12:51:16.858281] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:20:17.249 [2024-12-05 12:51:16.858412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82748 ] 00:20:17.249 [2024-12-05 12:51:17.018036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.249 [2024-12-05 12:51:17.043547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.508 Running I/O for 5 seconds... 
00:20:19.833 61248.00 IOPS, 239.25 MiB/s [2024-12-05T12:51:20.284Z] 65696.00 IOPS, 256.62 MiB/s [2024-12-05T12:51:21.665Z] 64341.33 IOPS, 251.33 MiB/s [2024-12-05T12:51:22.609Z] 63312.00 IOPS, 247.31 MiB/s 00:20:22.757 Latency(us) 00:20:22.757 [2024-12-05T12:51:22.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.757 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:22.757 xnvme_bdev : 5.00 64193.87 250.76 0.00 0.00 993.30 444.26 3150.77 00:20:22.757 [2024-12-05T12:51:22.609Z] =================================================================================================================== 00:20:22.757 [2024-12-05T12:51:22.609Z] Total : 64193.87 250.76 0.00 0.00 993.30 444.26 3150.77 00:20:22.757 12:51:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:22.757 12:51:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:20:22.757 12:51:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:22.757 12:51:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:22.757 12:51:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:22.757 { 00:20:22.757 "subsystems": [ 00:20:22.757 { 00:20:22.757 "subsystem": "bdev", 00:20:22.757 "config": [ 00:20:22.757 { 00:20:22.757 "params": { 00:20:22.757 "io_mechanism": "io_uring_cmd", 00:20:22.757 "conserve_cpu": false, 00:20:22.757 "filename": "/dev/ng0n1", 00:20:22.757 "name": "xnvme_bdev" 00:20:22.757 }, 00:20:22.757 "method": "bdev_xnvme_create" 00:20:22.757 }, 00:20:22.757 { 00:20:22.757 "method": "bdev_wait_for_examine" 00:20:22.757 } 00:20:22.757 ] 00:20:22.757 } 00:20:22.757 ] 00:20:22.757 } 00:20:22.757 [2024-12-05 12:51:22.488232] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:20:22.757 [2024-12-05 12:51:22.488357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82820 ] 00:20:23.019 [2024-12-05 12:51:22.650357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.019 [2024-12-05 12:51:22.675147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.019 Running I/O for 5 seconds... 
00:20:24.952 287.00 IOPS, 1.12 MiB/s [2024-12-05T12:51:26.190Z] 279.50 IOPS, 1.09 MiB/s [2024-12-05T12:51:27.126Z] 260.67 IOPS, 1.02 MiB/s [2024-12-05T12:51:28.069Z] 238.75 IOPS, 0.93 MiB/s [2024-12-05T12:51:28.069Z] 263.00 IOPS, 1.03 MiB/s 00:20:28.218 Latency(us) 00:20:28.218 [2024-12-05T12:51:28.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.218 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:20:28.218 xnvme_bdev : 5.15 267.77 1.05 0.00 0.00 235573.53 88.22 1122782.92 00:20:28.218 [2024-12-05T12:51:28.070Z] =================================================================================================================== 00:20:28.218 [2024-12-05T12:51:28.070Z] Total : 267.77 1.05 0.00 0.00 235573.53 88.22 1122782.92 00:20:28.480 00:20:28.480 real 0m22.402s 00:20:28.480 user 0m11.536s 00:20:28.480 sys 0m10.410s 00:20:28.480 12:51:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.480 12:51:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:28.480 ************************************ 00:20:28.480 END TEST xnvme_bdevperf 00:20:28.480 ************************************ 00:20:28.480 12:51:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:28.480 12:51:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.480 12:51:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.480 12:51:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:28.480 ************************************ 00:20:28.480 START TEST xnvme_fio_plugin 00:20:28.480 ************************************ 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # 
for sanitizer in "${sanitizers[@]}" 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:28.480 12:51:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:28.480 { 00:20:28.480 "subsystems": [ 00:20:28.480 { 00:20:28.480 "subsystem": "bdev", 00:20:28.480 "config": [ 00:20:28.480 { 00:20:28.480 "params": { 00:20:28.480 "io_mechanism": "io_uring_cmd", 00:20:28.480 "conserve_cpu": false, 00:20:28.480 "filename": "/dev/ng0n1", 00:20:28.480 "name": "xnvme_bdev" 00:20:28.480 }, 00:20:28.480 "method": "bdev_xnvme_create" 00:20:28.480 }, 00:20:28.480 { 00:20:28.480 "method": "bdev_wait_for_examine" 00:20:28.480 } 00:20:28.480 ] 00:20:28.480 } 00:20:28.480 ] 00:20:28.480 } 00:20:28.739 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:28.739 fio-3.35 00:20:28.739 Starting 1 thread 00:20:34.045 00:20:34.045 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82923: Thu Dec 5 12:51:33 2024 00:20:34.045 read: IOPS=36.1k, BW=141MiB/s (148MB/s)(705MiB/5001msec) 00:20:34.045 slat (nsec): min=2882, max=96884, avg=4630.13, stdev=2651.60 00:20:34.045 clat (usec): min=723, max=3657, avg=1585.52, stdev=368.35 00:20:34.045 lat (usec): min=727, max=3684, avg=1590.15, stdev=368.77 00:20:34.045 clat percentiles (usec): 00:20:34.045 | 1.00th=[ 922], 5.00th=[ 1045], 10.00th=[ 1139], 20.00th=[ 1270], 00:20:34.045 | 30.00th=[ 1369], 40.00th=[ 1450], 50.00th=[ 1549], 60.00th=[ 1647], 00:20:34.045 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2057], 95.00th=[ 2212], 00:20:34.045 | 99.00th=[ 2671], 99.50th=[ 2835], 99.90th=[ 3228], 99.95th=[ 3326], 00:20:34.045 | 99.99th=[ 3523] 00:20:34.045 bw ( KiB/s): min=134898, max=154112, per=100.00%, avg=144804.67, stdev=6943.94, samples=9 00:20:34.045 iops : min=33724, max=38528, avg=36201.11, stdev=1736.07, samples=9 00:20:34.045 lat (usec) : 750=0.01%, 1000=3.01% 00:20:34.045 lat (msec) : 2=84.30%, 4=12.69% 00:20:34.045 cpu : usr=39.96%, sys=58.84%, ctx=6, majf=0, minf=771 00:20:34.045 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:34.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.045 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:20:34.045 issued rwts: total=180591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.045 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:34.045 00:20:34.045 Run status group 0 (all jobs): 00:20:34.045 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=705MiB (740MB), run=5001-5001msec 00:20:34.305 ----------------------------------------------------- 00:20:34.305 Suppressions used: 00:20:34.305 count bytes template 00:20:34.305 1 11 /usr/src/fio/parse.c 00:20:34.305 1 8 libtcmalloc_minimal.so 00:20:34.305 1 904 libcrypto.so 00:20:34.305 ----------------------------------------------------- 00:20:34.305 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.565 12:51:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:34.565 { 00:20:34.565 "subsystems": [ 00:20:34.565 { 00:20:34.565 "subsystem": "bdev", 00:20:34.565 "config": [ 00:20:34.565 { 00:20:34.565 "params": { 00:20:34.565 "io_mechanism": "io_uring_cmd", 00:20:34.565 "conserve_cpu": false, 00:20:34.565 "filename": "/dev/ng0n1", 00:20:34.565 "name": "xnvme_bdev" 00:20:34.565 }, 00:20:34.565 "method": "bdev_xnvme_create" 00:20:34.565 }, 00:20:34.565 { 00:20:34.565 "method": "bdev_wait_for_examine" 00:20:34.565 } 00:20:34.565 ] 00:20:34.565 } 00:20:34.565 ] 00:20:34.565 } 00:20:34.565 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:34.565 fio-3.35 00:20:34.565 Starting 1 thread 00:20:41.170 00:20:41.170 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83008: Thu Dec 5 12:51:39 2024 00:20:41.170 write: IOPS=29.9k, BW=117MiB/s (123MB/s)(585MiB/5001msec); 0 zone resets 00:20:41.170 slat (usec): min=2, max=129, avg= 4.08, stdev= 2.17 00:20:41.170 clat (usec): min=62, max=369209, avg=1984.49, stdev=9376.29 00:20:41.170 lat (usec): min=66, max=369212, avg=1988.58, stdev=9376.27 00:20:41.170 clat percentiles (usec): 00:20:41.170 | 1.00th=[ 742], 5.00th=[ 922], 10.00th=[ 1004], 20.00th=[ 1123], 00:20:41.170 | 30.00th=[ 1237], 40.00th=[ 1336], 50.00th=[ 1450], 60.00th=[ 1549], 00:20:41.170 | 70.00th=[ 1663], 80.00th=[ 1811], 90.00th=[ 2024], 95.00th=[ 2245], 00:20:41.170 | 99.00th=[ 5342], 99.50th=[ 8586], 99.90th=[177210], 99.95th=[187696], 00:20:41.170 | 99.99th=[367002] 00:20:41.170 bw ( KiB/s): min= 7192, max=176952, per=96.28%, avg=115303.11, stdev=58502.60, samples=9 00:20:41.170 iops : min= 1798, max=44238, avg=28825.78, stdev=14625.65, samples=9 00:20:41.170 lat (usec) : 100=0.01%, 250=0.03%, 500=0.24%, 750=0.78%, 1000=8.87% 00:20:41.170 lat (msec) : 2=79.34%, 4=9.30%, 10=1.04%, 20=0.07%, 50=0.04% 00:20:41.170 lat (msec) : 100=0.05%, 250=0.19%, 500=0.03% 00:20:41.170 cpu : usr=37.46%, sys=61.70%, ctx=12, majf=0, minf=772 00:20:41.170 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.5%, 16=24.2%, 32=52.7%, >=64=1.9% 00:20:41.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.170 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:41.170 issued rwts: total=0,149728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.170 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:41.170 00:20:41.170 Run status group 0 (all jobs): 00:20:41.170 WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=585MiB (613MB), run=5001-5001msec 00:20:41.170 ----------------------------------------------------- 00:20:41.170 Suppressions used: 00:20:41.170 count bytes template 00:20:41.170 1 11 /usr/src/fio/parse.c 00:20:41.170 1 8 libtcmalloc_minimal.so 00:20:41.170 1 904 libcrypto.so 00:20:41.170 ----------------------------------------------------- 00:20:41.170 00:20:41.170 ************************************ 00:20:41.170 END TEST xnvme_fio_plugin 00:20:41.170 ************************************ 00:20:41.170 00:20:41.170 real 0m11.990s 00:20:41.170 user 0m5.013s 00:20:41.170 sys 0m6.544s 00:20:41.170 12:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.170 12:51:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:41.170 12:51:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:41.170 12:51:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # 
method_bdev_xnvme_create_0["conserve_cpu"]=true 00:20:41.170 12:51:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:41.170 12:51:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:41.170 12:51:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.170 12:51:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.170 12:51:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:41.170 ************************************ 00:20:41.170 START TEST xnvme_rpc 00:20:41.170 ************************************ 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:41.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83082 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83082 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83082 ']' 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.170 12:51:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.170 [2024-12-05 12:51:40.293832] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:20:41.170 [2024-12-05 12:51:40.294637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83082 ] 00:20:41.170 [2024-12-05 12:51:40.455134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.170 [2024-12-05 12:51:40.480818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.432 xnvme_bdev 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.432 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.691 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:41.691 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:41.691 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.691 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.691 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.691 12:51:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83082 00:20:41.691 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83082 ']' 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83082 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83082 00:20:41.692 killing process with pid 83082 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83082' 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83082 00:20:41.692 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83082 00:20:41.953 ************************************ 00:20:41.953 END TEST xnvme_rpc 00:20:41.953 ************************************ 00:20:41.953 00:20:41.953 real 0m1.428s 00:20:41.953 user 0m1.523s 00:20:41.953 sys 0m0.379s 00:20:41.953 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.953 12:51:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.953 12:51:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:41.953 12:51:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.953 12:51:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.953 12:51:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:41.953 ************************************ 00:20:41.953 START TEST xnvme_bdevperf 00:20:41.953 ************************************ 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:41.953 12:51:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:41.953 { 00:20:41.953 "subsystems": [ 00:20:41.953 { 00:20:41.953 "subsystem": "bdev", 00:20:41.953 "config": [ 00:20:41.953 { 00:20:41.953 "params": { 00:20:41.953 "io_mechanism": "io_uring_cmd", 00:20:41.953 "conserve_cpu": true, 00:20:41.953 "filename": "/dev/ng0n1", 00:20:41.953 "name": "xnvme_bdev" 00:20:41.953 }, 00:20:41.953 "method": "bdev_xnvme_create" 00:20:41.953 }, 00:20:41.953 { 00:20:41.953 "method": "bdev_wait_for_examine" 00:20:41.953 } 00:20:41.953 ] 00:20:41.953 } 00:20:41.953 ] 00:20:41.953 } 00:20:41.953 [2024-12-05 12:51:41.769760] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:20:41.953 [2024-12-05 12:51:41.769911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83140 ] 00:20:42.214 [2024-12-05 12:51:41.922024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.214 [2024-12-05 12:51:41.946604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.214 Running I/O for 5 seconds... 00:20:44.535 37744.00 IOPS, 147.44 MiB/s [2024-12-05T12:51:45.324Z] 37294.50 IOPS, 145.68 MiB/s [2024-12-05T12:51:46.266Z] 37046.33 IOPS, 144.71 MiB/s [2024-12-05T12:51:47.244Z] 37246.50 IOPS, 145.49 MiB/s [2024-12-05T12:51:47.244Z] 37169.20 IOPS, 145.19 MiB/s 00:20:47.392 Latency(us) 00:20:47.392 [2024-12-05T12:51:47.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.392 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:47.392 xnvme_bdev : 5.00 37147.29 145.11 0.00 0.00 1718.27 129.18 145187.45 00:20:47.392 [2024-12-05T12:51:47.244Z] =================================================================================================================== 00:20:47.392 [2024-12-05T12:51:47.244Z] Total : 37147.29 145.11 0.00 0.00 1718.27 129.18 145187.45 00:20:47.392 12:51:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:47.392 12:51:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:47.392 12:51:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:47.392 12:51:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:47.392 12:51:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:47.653 { 00:20:47.653 "subsystems": [ 00:20:47.653 { 00:20:47.653 "subsystem": "bdev", 00:20:47.653 "config": [ 00:20:47.653 { 00:20:47.653 "params": { 00:20:47.653 "io_mechanism": "io_uring_cmd", 00:20:47.653 "conserve_cpu": true, 00:20:47.653 "filename": "/dev/ng0n1", 00:20:47.653 "name": "xnvme_bdev" 00:20:47.653 }, 00:20:47.653 "method": "bdev_xnvme_create" 00:20:47.653 }, 00:20:47.653 { 00:20:47.653 "method": "bdev_wait_for_examine" 00:20:47.653 } 00:20:47.653 ] 00:20:47.653 } 00:20:47.653 ] 00:20:47.653 } 00:20:47.653 [2024-12-05 12:51:47.283586] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:20:47.653 [2024-12-05 12:51:47.283715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83203 ] 00:20:47.653 [2024-12-05 12:51:47.443636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.653 [2024-12-05 12:51:47.469087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.913 Running I/O for 5 seconds... 00:20:49.795 39732.00 IOPS, 155.20 MiB/s [2024-12-05T12:51:50.589Z] 39337.50 IOPS, 153.66 MiB/s [2024-12-05T12:51:51.649Z] 39274.00 IOPS, 153.41 MiB/s [2024-12-05T12:51:52.588Z] 37605.75 IOPS, 146.90 MiB/s 00:20:52.736 Latency(us) 00:20:52.736 [2024-12-05T12:51:52.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.736 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:52.736 xnvme_bdev : 5.00 37181.24 145.24 0.00 0.00 1716.64 62.23 13913.80 00:20:52.736 [2024-12-05T12:51:52.588Z] =================================================================================================================== 00:20:52.736 [2024-12-05T12:51:52.588Z] Total : 37181.24 145.24 0.00 0.00 1716.64 62.23 13913.80 00:20:52.997 12:51:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:52.997 12:51:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:52.997 12:51:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:52.997 12:51:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:52.997 12:51:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:52.997 { 00:20:52.997 "subsystems": [ 00:20:52.997 { 00:20:52.997 "subsystem": "bdev", 00:20:52.997 "config": [ 00:20:52.997 { 00:20:52.997 "params": { 00:20:52.997 "io_mechanism": "io_uring_cmd", 00:20:52.997 "conserve_cpu": true, 00:20:52.997 "filename": "/dev/ng0n1", 00:20:52.997 "name": "xnvme_bdev" 00:20:52.997 }, 00:20:52.997 "method": "bdev_xnvme_create" 00:20:52.997 }, 00:20:52.997 { 00:20:52.997 "method": "bdev_wait_for_examine" 00:20:52.997 } 00:20:52.997 ] 00:20:52.997 } 00:20:52.997 ] 00:20:52.997 } 00:20:52.997 [2024-12-05 12:51:52.808689] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:20:52.997 [2024-12-05 12:51:52.808841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83272 ] 00:20:53.258 [2024-12-05 12:51:52.970959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.258 [2024-12-05 12:51:52.995720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.258 Running I/O for 5 seconds... 
00:20:55.581 80512.00 IOPS, 314.50 MiB/s [2024-12-05T12:51:56.375Z] 80832.00 IOPS, 315.75 MiB/s [2024-12-05T12:51:57.317Z] 78592.00 IOPS, 307.00 MiB/s [2024-12-05T12:51:58.257Z] 74032.00 IOPS, 289.19 MiB/s [2024-12-05T12:51:58.257Z] 73715.20 IOPS, 287.95 MiB/s 00:20:58.405 Latency(us) 00:20:58.405 [2024-12-05T12:51:58.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.405 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:58.405 xnvme_bdev : 5.00 73691.36 287.86 0.00 0.00 864.84 384.39 3806.13 00:20:58.405 [2024-12-05T12:51:58.257Z] =================================================================================================================== 00:20:58.405 [2024-12-05T12:51:58.257Z] Total : 73691.36 287.86 0.00 0.00 864.84 384.39 3806.13 00:20:58.665 12:51:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:58.665 12:51:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:20:58.665 12:51:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:58.665 12:51:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:58.665 12:51:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:58.665 { 00:20:58.665 "subsystems": [ 00:20:58.665 { 00:20:58.665 "subsystem": "bdev", 00:20:58.665 "config": [ 00:20:58.665 { 00:20:58.665 "params": { 00:20:58.665 "io_mechanism": "io_uring_cmd", 00:20:58.665 "conserve_cpu": true, 00:20:58.665 "filename": "/dev/ng0n1", 00:20:58.665 "name": "xnvme_bdev" 00:20:58.665 }, 00:20:58.665 "method": "bdev_xnvme_create" 00:20:58.665 }, 00:20:58.665 { 00:20:58.665 "method": "bdev_wait_for_examine" 00:20:58.665 } 00:20:58.665 ] 00:20:58.665 } 00:20:58.665 ] 00:20:58.665 } 00:20:58.665 [2024-12-05 12:51:58.378555] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:20:58.665 [2024-12-05 12:51:58.378846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83335 ] 00:20:58.926 [2024-12-05 12:51:58.563886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.926 [2024-12-05 12:51:58.589829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.926 Running I/O for 5 seconds... 
00:21:01.246 10098.00 IOPS, 39.45 MiB/s [2024-12-05T12:52:01.716Z] 14853.50 IOPS, 58.02 MiB/s [2024-12-05T12:52:03.102Z] 14050.00 IOPS, 54.88 MiB/s [2024-12-05T12:52:04.045Z] 14503.25 IOPS, 56.65 MiB/s [2024-12-05T12:52:04.045Z] 17170.40 IOPS, 67.07 MiB/s 00:21:04.193 Latency(us) 00:21:04.193 [2024-12-05T12:52:04.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.193 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:21:04.193 xnvme_bdev : 5.12 16769.59 65.51 0.00 0.00 3767.10 60.65 496863.70 00:21:04.193 [2024-12-05T12:52:04.045Z] =================================================================================================================== 00:21:04.193 [2024-12-05T12:52:04.045Z] Total : 16769.59 65.51 0.00 0.00 3767.10 60.65 496863.70 00:21:04.193 00:21:04.193 real 0m22.311s 00:21:04.193 user 0m16.416s 00:21:04.193 sys 0m4.627s 00:21:04.193 12:52:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.193 ************************************ 00:21:04.193 END TEST xnvme_bdevperf 00:21:04.193 ************************************ 00:21:04.193 12:52:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 12:52:04 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:04.453 12:52:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:04.453 12:52:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.454 12:52:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:04.454 ************************************ 00:21:04.454 START TEST xnvme_fio_plugin 00:21:04.454 ************************************ 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:04.454 { 00:21:04.454 "subsystems": [ 00:21:04.454 { 00:21:04.454 "subsystem": "bdev", 00:21:04.454 "config": [ 00:21:04.454 { 00:21:04.454 "params": { 00:21:04.454 "io_mechanism": "io_uring_cmd", 00:21:04.454 "conserve_cpu": true, 00:21:04.454 "filename": "/dev/ng0n1", 00:21:04.454 "name": "xnvme_bdev" 00:21:04.454 }, 00:21:04.454 "method": "bdev_xnvme_create" 00:21:04.454 }, 00:21:04.454 { 00:21:04.454 "method": "bdev_wait_for_examine" 00:21:04.454 } 00:21:04.454 ] 00:21:04.454 } 00:21:04.454 ] 00:21:04.454 } 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:04.454 12:52:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.454 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:04.454 fio-3.35 00:21:04.454 Starting 1 thread 00:21:11.033 00:21:11.033 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83443: Thu Dec 5 12:52:09 2024 00:21:11.033 read: IOPS=41.5k, BW=162MiB/s (170MB/s)(811MiB/5001msec) 00:21:11.033 slat (usec): min=2, max=130, avg= 3.97, stdev= 2.52 00:21:11.033 clat (usec): min=325, max=28412, avg=1384.91, stdev=298.01 00:21:11.033 lat (usec): min=328, max=28415, avg=1388.89, stdev=298.81 00:21:11.033 clat percentiles (usec): 00:21:11.033 | 1.00th=[ 848], 5.00th=[ 979], 10.00th=[ 1057], 20.00th=[ 1156], 00:21:11.033 | 30.00th=[ 1221], 40.00th=[ 1287], 50.00th=[ 1352], 60.00th=[ 1401], 00:21:11.033 | 70.00th=[ 1483], 80.00th=[ 1598], 90.00th=[ 1762], 95.00th=[ 1909], 00:21:11.033 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2933], 99.95th=[ 3195], 00:21:11.033 | 99.99th=[ 3785] 00:21:11.033 bw ( KiB/s): min=150016, max=175616, per=99.96%, avg=166058.67, stdev=8215.96, samples=9 00:21:11.033 iops : min=37504, max=43904, avg=41514.67, stdev=2053.99, samples=9 00:21:11.033 lat (usec) : 500=0.01%, 750=0.12%, 1000=6.08% 00:21:11.033 lat (msec) : 2=90.45%, 4=3.34%, 10=0.01%, 50=0.01% 00:21:11.033 cpu : usr=67.60%, sys=29.92%, ctx=36, majf=0, minf=771 00:21:11.033 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:21:11.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.033 complete : 0=0.0%, 4=98.5%, 
8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:21:11.033 issued rwts: total=207703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.033 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:11.033 00:21:11.033 Run status group 0 (all jobs): 00:21:11.033 READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=811MiB (851MB), run=5001-5001msec 00:21:11.033 ----------------------------------------------------- 00:21:11.033 Suppressions used: 00:21:11.033 count bytes template 00:21:11.033 1 11 /usr/src/fio/parse.c 00:21:11.033 1 8 libtcmalloc_minimal.so 00:21:11.033 1 904 libcrypto.so 00:21:11.033 ----------------------------------------------------- 00:21:11.033 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:11.033 12:52:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev 
--direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:11.033 { 00:21:11.033 "subsystems": [ 00:21:11.033 { 00:21:11.033 "subsystem": "bdev", 00:21:11.033 "config": [ 00:21:11.033 { 00:21:11.033 "params": { 00:21:11.033 "io_mechanism": "io_uring_cmd", 00:21:11.033 "conserve_cpu": true, 00:21:11.033 "filename": "/dev/ng0n1", 00:21:11.033 "name": "xnvme_bdev" 00:21:11.033 }, 00:21:11.033 "method": "bdev_xnvme_create" 00:21:11.033 }, 00:21:11.033 { 00:21:11.033 "method": "bdev_wait_for_examine" 00:21:11.033 } 00:21:11.033 ] 00:21:11.033 } 00:21:11.033 ] 00:21:11.033 } 00:21:11.033 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:11.033 fio-3.35 00:21:11.033 Starting 1 thread 00:21:16.499 00:21:16.499 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83523: Thu Dec 5 12:52:15 2024 00:21:16.499 write: IOPS=16.6k, BW=64.8MiB/s (67.9MB/s)(324MiB/5001msec); 0 zone resets 00:21:16.499 slat (usec): min=2, max=389, avg= 5.16, stdev= 4.86 00:21:16.499 clat (usec): min=42, max=162311, avg=3711.90, stdev=7556.93 00:21:16.499 lat (usec): min=46, max=162329, avg=3717.05, stdev=7556.99 00:21:16.499 clat percentiles (usec): 00:21:16.499 | 1.00th=[ 155], 5.00th=[ 469], 10.00th=[ 799], 20.00th=[ 1123], 00:21:16.499 | 30.00th=[ 1287], 40.00th=[ 1401], 50.00th=[ 1532], 60.00th=[ 1696], 00:21:16.499 | 70.00th=[ 1958], 80.00th=[ 3326], 90.00th=[ 12518], 95.00th=[ 15139], 00:21:16.499 | 99.00th=[ 19006], 99.50th=[ 21103], 99.90th=[147850], 99.95th=[154141], 00:21:16.499 | 99.99th=[160433] 00:21:16.499 bw ( KiB/s): min=33784, max=148440, per=100.00%, avg=67199.11, stdev=44653.86, samples=9 00:21:16.499 iops : min= 8446, max=37110, avg=16799.78, stdev=11163.46, samples=9 00:21:16.499 lat (usec) : 50=0.01%, 100=0.27%, 250=1.49%, 500=3.72%, 750=3.24% 00:21:16.499 lat (usec) : 1000=6.98% 00:21:16.499 lat (msec) : 2=55.65%, 4=9.70%, 10=5.55%, 20=12.70%, 50=0.47% 00:21:16.499 lat (msec) : 100=0.08%, 250=0.15% 00:21:16.499 cpu : usr=82.36%, sys=12.56%, ctx=9, majf=0, minf=772 00:21:16.499 IO depths : 1=1.0%, 2=1.9%, 4=3.9%, 8=7.9%, 16=16.3%, 32=62.1%, >=64=7.0% 00:21:16.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.499 complete : 0=0.0%, 4=96.6%, 8=1.2%, 16=0.9%, 32=0.3%, 64=1.0%, >=64=0.0% 00:21:16.499 issued rwts: total=0,82924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.499 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:16.499 00:21:16.499 Run status group 0 (all jobs): 00:21:16.499 WRITE: bw=64.8MiB/s (67.9MB/s), 64.8MiB/s-64.8MiB/s (67.9MB/s-67.9MB/s), io=324MiB (340MB), run=5001-5001msec 00:21:16.499 ----------------------------------------------------- 00:21:16.499 Suppressions used: 00:21:16.499 count bytes template 00:21:16.499 1 11 /usr/src/fio/parse.c 00:21:16.499 1 8 libtcmalloc_minimal.so 00:21:16.499 1 904 libcrypto.so 00:21:16.499 ----------------------------------------------------- 00:21:16.499 00:21:16.499 00:21:16.499 real 0m12.004s 00:21:16.499 user 0m8.667s 00:21:16.499 sys 0m2.627s 00:21:16.499 12:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.499 ************************************ 00:21:16.499 END TEST xnvme_fio_plugin 00:21:16.499 ************************************ 00:21:16.499 12:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:16.499 Process with pid 83082 is not found 00:21:16.499 12:52:16 nvme_xnvme -- xnvme/xnvme.sh@1 
-- # killprocess 83082 00:21:16.499 12:52:16 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83082 ']' 00:21:16.499 12:52:16 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 83082 00:21:16.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83082) - No such process 00:21:16.499 12:52:16 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 83082 is not found' 00:21:16.499 00:21:16.499 real 2m57.681s 00:21:16.499 user 1m32.208s 00:21:16.499 sys 1m10.960s 00:21:16.758 ************************************ 00:21:16.758 END TEST nvme_xnvme 00:21:16.758 ************************************ 00:21:16.758 12:52:16 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.758 12:52:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:16.758 12:52:16 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:21:16.758 12:52:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.758 12:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.758 12:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:16.758 ************************************ 00:21:16.758 START TEST blockdev_xnvme 00:21:16.758 ************************************ 00:21:16.758 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:21:16.758 * Looking for test storage... 00:21:16.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:16.758 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:16.758 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.759 12:52:16 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.759 --rc genhtml_branch_coverage=1 00:21:16.759 --rc genhtml_function_coverage=1 00:21:16.759 --rc genhtml_legend=1 00:21:16.759 --rc geninfo_all_blocks=1 00:21:16.759 --rc geninfo_unexecuted_blocks=1 00:21:16.759 00:21:16.759 ' 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.759 --rc genhtml_branch_coverage=1 00:21:16.759 --rc genhtml_function_coverage=1 00:21:16.759 --rc genhtml_legend=1 00:21:16.759 --rc geninfo_all_blocks=1 00:21:16.759 --rc geninfo_unexecuted_blocks=1 00:21:16.759 00:21:16.759 ' 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.759 --rc genhtml_branch_coverage=1 00:21:16.759 --rc genhtml_function_coverage=1 00:21:16.759 --rc genhtml_legend=1 00:21:16.759 --rc geninfo_all_blocks=1 00:21:16.759 --rc geninfo_unexecuted_blocks=1 00:21:16.759 00:21:16.759 ' 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:16.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.759 --rc genhtml_branch_coverage=1 00:21:16.759 --rc genhtml_function_coverage=1 00:21:16.759 --rc genhtml_legend=1 00:21:16.759 --rc geninfo_all_blocks=1 00:21:16.759 --rc geninfo_unexecuted_blocks=1 00:21:16.759 00:21:16.759 ' 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83657 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83657 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83657 ']' 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.759 12:52:16 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.759 12:52:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:17.056 [2024-12-05 12:52:16.636935] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
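The trace above shows the harness launching spdk_tgt in the background, installing a trap so the target is killed if the script exits early, and then blocking in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the standard in-tree paths; the retry count and sleep interval here are illustrative, not the harness's exact values:

    # Start the target in the background and remember its pid.
    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Kill the target if this shell exits early for any reason.
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Poll the RPC socket until it answers.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done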
00:21:17.056 [2024-12-05 12:52:16.637582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83657 ] 00:21:17.056 [2024-12-05 12:52:16.799241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.056 [2024-12-05 12:52:16.824072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.623 12:52:17 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.623 12:52:17 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:21:17.623 12:52:17 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:21:17.623 12:52:17 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:21:17.623 12:52:17 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:21:17.623 12:52:17 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:21:17.623 12:52:17 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:18.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.758 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:18.758 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:18.758 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:21:18.758 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:21:18.758 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:18.758 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:21:18.759 12:52:18 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:21:18.759 nvme0n1 00:21:18.759 nvme0n2 00:21:18.759 nvme0n3 00:21:18.759 nvme1n1 00:21:18.759 nvme2n1 00:21:18.759 nvme3n1 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.759 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.759 12:52:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:19.018 12:52:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.018 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:19.018 12:52:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.018 12:52:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:19.018 
12:52:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.018 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:21:19.018 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:21:19.018 12:52:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.018 12:52:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:19.018 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:21:19.018 12:52:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.018 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:21:19.019 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "20a9c952-99e7-4534-9caa-8cdb907d17f0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "20a9c952-99e7-4534-9caa-8cdb907d17f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "69a8a539-ebbb-4a05-8642-baecd309a2d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "69a8a539-ebbb-4a05-8642-baecd309a2d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c859cedc-6ec7-49e8-b82c-aa547279faf9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c859cedc-6ec7-49e8-b82c-aa547279faf9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"2bda77d1-4426-4fe6-8499-e9c3f01bf065"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2bda77d1-4426-4fe6-8499-e9c3f01bf065",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "784589b2-86ef-4788-9ff9-9ff346496b88"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "784589b2-86ef-4788-9ff9-9ff346496b88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "201d64f3-a03f-427a-8bd7-edf867524af5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "201d64f3-a03f-427a-8bd7-edf867524af5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:19.019 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:21:19.019 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:21:19.019 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:21:19.019 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:21:19.019 12:52:18 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83657 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83657 ']' 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83657 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 83657 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.019 killing process with pid 83657 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83657' 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83657 00:21:19.019 12:52:18 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83657 00:21:19.277 12:52:19 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:19.277 12:52:19 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:21:19.277 12:52:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:19.277 12:52:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.277 12:52:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:19.277 ************************************ 00:21:19.277 START TEST bdev_hello_world 00:21:19.277 ************************************ 00:21:19.277 12:52:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:21:19.537 [2024-12-05 12:52:19.135644] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:21:19.537 [2024-12-05 12:52:19.135771] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83919 ] 00:21:19.537 [2024-12-05 12:52:19.291980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.537 [2024-12-05 12:52:19.317248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.798 [2024-12-05 12:52:19.530854] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:19.798 [2024-12-05 12:52:19.530905] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:21:19.798 [2024-12-05 12:52:19.530927] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:19.798 [2024-12-05 12:52:19.532955] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:19.798 [2024-12-05 12:52:19.533595] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:19.798 [2024-12-05 12:52:19.533619] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:19.798 [2024-12-05 12:52:19.533804] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
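The NOTICE lines here are the hello_bdev example completing a full round trip: it opens the bdev named on the command line, writes a buffer, reads it back through the same io channel, and prints the recovered string. Reduced to its essentials, the invocation the harness uses is (paths shortened from the trace above):

    # Open bdev nvme0n1 from the JSON config, write "Hello World!",
    # then read it back and print the result.
    ./build/examples/hello_bdev \
        --json test/bdev/bdev.json \
        -b nvme0n1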
00:21:19.798 00:21:19.798 [2024-12-05 12:52:19.533844] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:20.057 00:21:20.057 real 0m0.637s 00:21:20.057 user 0m0.305s 00:21:20.057 sys 0m0.188s 00:21:20.057 12:52:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.057 12:52:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:20.057 ************************************ 00:21:20.057 END TEST bdev_hello_world 00:21:20.057 ************************************ 00:21:20.057 12:52:19 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:21:20.057 12:52:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:20.057 12:52:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.057 12:52:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:20.057 ************************************ 00:21:20.057 START TEST bdev_bounds 00:21:20.057 ************************************ 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83949 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:20.057 Process bdevio pid: 83949 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83949' 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83949 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83949 ']' 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:20.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.057 12:52:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:20.057 [2024-12-05 12:52:19.841882] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
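bdevio is started with -w, so it comes up idle and waits for an RPC trigger; the companion tests.py script then connects over the RPC socket and fires the registered CUnit suites. A sketch of that two-process pattern, assuming the in-tree paths shown in the trace and the default /var/tmp/spdk.sock socket:

    # Start the bdevio server in wait mode (-w) against the bdev config.
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # Give the app time to bring up the RPC socket (the harness uses its
    # waitforlisten helper here; a fixed sleep keeps the sketch short).
    sleep 2
    # Trigger all registered suites, then tear the server down.
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"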
00:21:20.057 [2024-12-05 12:52:19.842021] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83949 ] 00:21:20.315 [2024-12-05 12:52:20.006334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:20.315 [2024-12-05 12:52:20.042260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.315 [2024-12-05 12:52:20.042920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.315 [2024-12-05 12:52:20.043138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.884 12:52:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.884 12:52:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:20.884 12:52:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:21.170 I/O targets: 00:21:21.170 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:21.170 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:21.170 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:21.170 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:21:21.170 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:21:21.170 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:21:21.170 00:21:21.170 00:21:21.170 CUnit - A unit testing framework for C - Version 2.1-3 00:21:21.170 http://cunit.sourceforge.net/ 00:21:21.170 00:21:21.170 00:21:21.171 Suite: bdevio tests on: nvme3n1 00:21:21.171 Test: blockdev write read block ...passed 00:21:21.171 Test: blockdev write zeroes read block ...passed 00:21:21.171 Test: blockdev write zeroes read no split ...passed 00:21:21.171 Test: blockdev write zeroes read split ...passed 00:21:21.171 Test: blockdev write zeroes read split partial ...passed 00:21:21.171 Test: blockdev reset ...passed 00:21:21.171 Test: blockdev write read 8 blocks ...passed 00:21:21.171 Test: blockdev write read size > 128k ...passed 00:21:21.171 Test: blockdev write read invalid size ...passed 00:21:21.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:21.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:21.171 Test: blockdev write read max offset ...passed 00:21:21.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:21.171 Test: blockdev writev readv 8 blocks ...passed 00:21:21.171 Test: blockdev writev readv 30 x 1block ...passed 00:21:21.171 Test: blockdev writev readv block ...passed 00:21:21.171 Test: blockdev writev readv size > 128k ...passed 00:21:21.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:21.171 Test: blockdev comparev and writev ...passed 00:21:21.171 Test: blockdev nvme passthru rw ...passed 00:21:21.171 Test: blockdev nvme passthru vendor specific ...passed 00:21:21.171 Test: blockdev nvme admin passthru ...passed 00:21:21.171 Test: blockdev copy ...passed 00:21:21.171 Suite: bdevio tests on: nvme2n1 00:21:21.171 Test: blockdev write read block ...passed 00:21:21.171 Test: blockdev write zeroes read block ...passed 00:21:21.171 Test: blockdev write zeroes read no split ...passed 00:21:21.171 Test: blockdev write zeroes read split ...passed 00:21:21.171 Test: blockdev write zeroes read split partial ...passed 00:21:21.171 Test: blockdev reset ...passed 
00:21:21.171 Test: blockdev write read 8 blocks ...passed 00:21:21.171 Test: blockdev write read size > 128k ...passed 00:21:21.171 Test: blockdev write read invalid size ...passed 00:21:21.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:21.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:21.171 Test: blockdev write read max offset ...passed 00:21:21.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:21.171 Test: blockdev writev readv 8 blocks ...passed 00:21:21.171 Test: blockdev writev readv 30 x 1block ...passed 00:21:21.171 Test: blockdev writev readv block ...passed 00:21:21.171 Test: blockdev writev readv size > 128k ...passed 00:21:21.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:21.171 Test: blockdev comparev and writev ...passed 00:21:21.171 Test: blockdev nvme passthru rw ...passed 00:21:21.171 Test: blockdev nvme passthru vendor specific ...passed 00:21:21.171 Test: blockdev nvme admin passthru ...passed 00:21:21.171 Test: blockdev copy ...passed 00:21:21.171 Suite: bdevio tests on: nvme1n1 00:21:21.171 Test: blockdev write read block ...passed 00:21:21.171 Test: blockdev write zeroes read block ...passed 00:21:21.171 Test: blockdev write zeroes read no split ...passed 00:21:21.171 Test: blockdev write zeroes read split ...passed 00:21:21.171 Test: blockdev write zeroes read split partial ...passed 00:21:21.171 Test: blockdev reset ...passed 00:21:21.171 Test: blockdev write read 8 blocks ...passed 00:21:21.171 Test: blockdev write read size > 128k ...passed 00:21:21.171 Test: blockdev write read invalid size ...passed 00:21:21.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:21.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:21.171 Test: blockdev write read max offset ...passed 00:21:21.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:21.171 Test: blockdev writev readv 8 blocks ...passed 00:21:21.171 Test: blockdev writev readv 30 x 1block ...passed 00:21:21.171 Test: blockdev writev readv block ...passed 00:21:21.171 Test: blockdev writev readv size > 128k ...passed 00:21:21.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:21.171 Test: blockdev comparev and writev ...passed 00:21:21.171 Test: blockdev nvme passthru rw ...passed 00:21:21.171 Test: blockdev nvme passthru vendor specific ...passed 00:21:21.171 Test: blockdev nvme admin passthru ...passed 00:21:21.171 Test: blockdev copy ...passed 00:21:21.171 Suite: bdevio tests on: nvme0n3 00:21:21.171 Test: blockdev write read block ...passed 00:21:21.171 Test: blockdev write zeroes read block ...passed 00:21:21.171 Test: blockdev write zeroes read no split ...passed 00:21:21.171 Test: blockdev write zeroes read split ...passed 00:21:21.171 Test: blockdev write zeroes read split partial ...passed 00:21:21.171 Test: blockdev reset ...passed 00:21:21.171 Test: blockdev write read 8 blocks ...passed 00:21:21.171 Test: blockdev write read size > 128k ...passed 00:21:21.171 Test: blockdev write read invalid size ...passed 00:21:21.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:21.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:21.171 Test: blockdev write read max offset ...passed 00:21:21.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:21.171 Test: blockdev writev readv 8 blocks 
...passed 00:21:21.171 Test: blockdev writev readv 30 x 1block ...passed 00:21:21.171 Test: blockdev writev readv block ...passed 00:21:21.171 Test: blockdev writev readv size > 128k ...passed 00:21:21.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:21.171 Test: blockdev comparev and writev ...passed 00:21:21.171 Test: blockdev nvme passthru rw ...passed 00:21:21.171 Test: blockdev nvme passthru vendor specific ...passed 00:21:21.171 Test: blockdev nvme admin passthru ...passed 00:21:21.171 Test: blockdev copy ...passed 00:21:21.171 Suite: bdevio tests on: nvme0n2 00:21:21.171 Test: blockdev write read block ...passed 00:21:21.171 Test: blockdev write zeroes read block ...passed 00:21:21.171 Test: blockdev write zeroes read no split ...passed 00:21:21.171 Test: blockdev write zeroes read split ...passed 00:21:21.171 Test: blockdev write zeroes read split partial ...passed 00:21:21.171 Test: blockdev reset ...passed 00:21:21.171 Test: blockdev write read 8 blocks ...passed 00:21:21.171 Test: blockdev write read size > 128k ...passed 00:21:21.171 Test: blockdev write read invalid size ...passed 00:21:21.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:21.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:21.171 Test: blockdev write read max offset ...passed 00:21:21.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:21.171 Test: blockdev writev readv 8 blocks ...passed 00:21:21.171 Test: blockdev writev readv 30 x 1block ...passed 00:21:21.171 Test: blockdev writev readv block ...passed 00:21:21.171 Test: blockdev writev readv size > 128k ...passed 00:21:21.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:21.171 Test: blockdev comparev and writev ...passed 00:21:21.171 Test: blockdev nvme passthru rw ...passed 00:21:21.171 Test: blockdev nvme passthru vendor specific ...passed 00:21:21.171 Test: blockdev nvme admin passthru ...passed 00:21:21.171 Test: blockdev copy ...passed 00:21:21.171 Suite: bdevio tests on: nvme0n1 00:21:21.171 Test: blockdev write read block ...passed 00:21:21.171 Test: blockdev write zeroes read block ...passed 00:21:21.171 Test: blockdev write zeroes read no split ...passed 00:21:21.171 Test: blockdev write zeroes read split ...passed 00:21:21.171 Test: blockdev write zeroes read split partial ...passed 00:21:21.171 Test: blockdev reset ...passed 00:21:21.171 Test: blockdev write read 8 blocks ...passed 00:21:21.171 Test: blockdev write read size > 128k ...passed 00:21:21.171 Test: blockdev write read invalid size ...passed 00:21:21.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:21.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:21.171 Test: blockdev write read max offset ...passed 00:21:21.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:21.171 Test: blockdev writev readv 8 blocks ...passed 00:21:21.171 Test: blockdev writev readv 30 x 1block ...passed 00:21:21.171 Test: blockdev writev readv block ...passed 00:21:21.171 Test: blockdev writev readv size > 128k ...passed 00:21:21.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:21.171 Test: blockdev comparev and writev ...passed 00:21:21.171 Test: blockdev nvme passthru rw ...passed 00:21:21.171 Test: blockdev nvme passthru vendor specific ...passed 00:21:21.171 Test: blockdev nvme admin passthru ...passed 00:21:21.171 Test: blockdev copy ...passed 
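Each of the six suites above runs the same 23-test matrix, which is where the totals in the summary that follows come from: 23 tests across 6 suites gives the 138 tests reported.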
00:21:21.171 00:21:21.172 Run Summary: Type Total Ran Passed Failed Inactive 00:21:21.172 suites 6 6 n/a 0 0 00:21:21.172 tests 138 138 138 0 0 00:21:21.172 asserts 780 780 780 0 n/a 00:21:21.172 00:21:21.172 Elapsed time = 0.434 seconds 00:21:21.172 0 00:21:21.172 12:52:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83949 00:21:21.172 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83949 ']' 00:21:21.172 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83949 00:21:21.172 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:21.172 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83949 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.432 killing process with pid 83949 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83949' 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83949 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83949 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:21.432 00:21:21.432 real 0m1.446s 00:21:21.432 user 0m3.536s 00:21:21.432 sys 0m0.328s 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.432 ************************************ 00:21:21.432 END TEST bdev_bounds 00:21:21.432 ************************************ 00:21:21.432 12:52:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:21.433 12:52:21 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:21:21.433 12:52:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:21.433 12:52:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.433 12:52:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:21.694 ************************************ 00:21:21.694 START TEST bdev_nbd 00:21:21.694 ************************************ 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
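The nbd test that begins here exports each xnvme bdev as a kernel block device through the nbd driver, so ordinary tools such as dd and stat can exercise the SPDK I/O path. The core RPC pair is nbd_start_disk and nbd_stop_disk; a minimal sketch against the dedicated socket this test uses (the nbd kernel module must be loaded, as the [[ -e /sys/module/nbd ]] check below verifies):

    # Map bdev nvme0n1 to a free nbd node.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    # ... exercise /dev/nbd0 with regular block tools ...
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0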
00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83996 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83996 /var/tmp/spdk-nbd.sock 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83996 ']' 00:21:21.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.694 12:52:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:21.694 [2024-12-05 12:52:21.367283] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
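After each nbd_start_disk, the harness polls /proc/partitions until the kernel registers the node, then reads a single 4 KiB block with dd and checks the byte count, as the traces below show. A condensed sketch of that verify loop; the sleep interval and scratch path are illustrative, the retry bound mirrors the trace:

    # Wait (up to 20 tries) for nbd0 to appear as a partition entry.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    # Read one block through the nbd device and confirm 4096 bytes landed.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]] && rm -f /tmp/nbdtest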
00:21:21.694 [2024-12-05 12:52:21.367441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.694 [2024-12-05 12:52:21.528395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.954 [2024-12-05 12:52:21.553828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:22.524 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:21:22.783 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:22.783 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:22.783 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:22.783 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:22.783 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:22.783 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.784 
1+0 records in 00:21:22.784 1+0 records out 00:21:22.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116935 s, 3.5 MB/s 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:22.784 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.046 1+0 records in 00:21:23.046 1+0 records out 00:21:23.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108468 s, 3.8 MB/s 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:23.046 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:21:23.304 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:23.304 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:23.304 12:52:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:23.304 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:21:23.304 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:23.304 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:23.304 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.305 1+0 records in 00:21:23.305 1+0 records out 00:21:23.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130468 s, 3.1 MB/s 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:23.305 12:52:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.563 1+0 records in 00:21:23.563 1+0 records out 00:21:23.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651526 s, 6.3 MB/s 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:23.563 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:23.564 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:23.564 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.823 1+0 records in 00:21:23.823 1+0 records out 00:21:23.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00193222 s, 2.1 MB/s 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:23.823 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:21:24.092 12:52:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:24.092 1+0 records in 00:21:24.092 1+0 records out 00:21:24.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087552 s, 4.7 MB/s 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:24.092 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:24.352 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd0", 00:21:24.352 "bdev_name": "nvme0n1" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd1", 00:21:24.352 "bdev_name": "nvme0n2" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd2", 00:21:24.352 "bdev_name": "nvme0n3" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd3", 00:21:24.352 "bdev_name": "nvme1n1" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd4", 00:21:24.352 "bdev_name": "nvme2n1" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd5", 00:21:24.352 "bdev_name": "nvme3n1" 00:21:24.352 } 00:21:24.352 ]' 00:21:24.352 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:24.352 12:52:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd0", 00:21:24.352 "bdev_name": "nvme0n1" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd1", 00:21:24.352 "bdev_name": "nvme0n2" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd2", 00:21:24.352 "bdev_name": "nvme0n3" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd3", 00:21:24.352 "bdev_name": "nvme1n1" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd4", 00:21:24.352 "bdev_name": "nvme2n1" 00:21:24.352 }, 00:21:24.352 { 00:21:24.352 "nbd_device": "/dev/nbd5", 00:21:24.352 "bdev_name": "nvme3n1" 00:21:24.352 } 00:21:24.352 ]' 00:21:24.352 12:52:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:24.352 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:21:24.352 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:24.352 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:21:24.352 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:24.352 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:24.352 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.352 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.613 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.872 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.132 12:52:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.443 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:25.703 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:25.962 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:21:26.221 /dev/nbd0 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:26.221 1+0 records in 00:21:26.221 1+0 records out 00:21:26.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535719 s, 7.6 MB/s 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:26.221 12:52:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:21:26.481 /dev/nbd1 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:26.481 1+0 records in 00:21:26.481 1+0 records out 00:21:26.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137698 s, 3.0 MB/s 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:26.481 12:52:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:26.481 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:21:26.742 /dev/nbd10 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:26.742 1+0 records in 00:21:26.742 1+0 records out 00:21:26.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789227 s, 5.2 MB/s 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.742 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:26.743 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.743 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:26.743 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:26.743 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:26.743 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:26.743 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:21:26.743 /dev/nbd11 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:27.003 12:52:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:27.003 1+0 records in 00:21:27.003 1+0 records out 00:21:27.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118535 s, 3.5 MB/s 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:27.003 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:21:27.263 /dev/nbd12 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:27.263 1+0 records in 00:21:27.263 1+0 records out 00:21:27.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110534 s, 3.7 MB/s 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:27.263 12:52:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:21:27.524 /dev/nbd13 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:27.524 1+0 records in 00:21:27.524 1+0 records out 00:21:27.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128335 s, 3.2 MB/s 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:27.524 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd0", 00:21:27.784 "bdev_name": "nvme0n1" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd1", 00:21:27.784 "bdev_name": "nvme0n2" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd10", 00:21:27.784 "bdev_name": "nvme0n3" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd11", 00:21:27.784 "bdev_name": "nvme1n1" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd12", 00:21:27.784 "bdev_name": "nvme2n1" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd13", 00:21:27.784 "bdev_name": "nvme3n1" 00:21:27.784 } 00:21:27.784 ]' 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd0", 00:21:27.784 "bdev_name": "nvme0n1" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd1", 00:21:27.784 "bdev_name": "nvme0n2" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd10", 00:21:27.784 "bdev_name": "nvme0n3" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd11", 00:21:27.784 "bdev_name": "nvme1n1" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd12", 00:21:27.784 "bdev_name": "nvme2n1" 00:21:27.784 }, 00:21:27.784 { 00:21:27.784 "nbd_device": "/dev/nbd13", 00:21:27.784 "bdev_name": "nvme3n1" 00:21:27.784 } 00:21:27.784 ]' 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:27.784 /dev/nbd1 00:21:27.784 /dev/nbd10 00:21:27.784 /dev/nbd11 00:21:27.784 /dev/nbd12 00:21:27.784 /dev/nbd13' 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:27.784 /dev/nbd1 00:21:27.784 /dev/nbd10 00:21:27.784 /dev/nbd11 00:21:27.784 /dev/nbd12 00:21:27.784 /dev/nbd13' 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:27.784 256+0 records in 00:21:27.784 256+0 records out 00:21:27.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650324 s, 161 MB/s 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:27.784 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:28.045 256+0 records in 00:21:28.045 256+0 records out 00:21:28.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248728 s, 4.2 MB/s 00:21:28.045 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:28.045 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:28.306 256+0 records in 00:21:28.306 256+0 records out 00:21:28.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.258167 s, 
4.1 MB/s 00:21:28.306 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:28.306 12:52:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:21:28.565 256+0 records in 00:21:28.565 256+0 records out 00:21:28.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242233 s, 4.3 MB/s 00:21:28.565 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:28.565 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:21:28.565 256+0 records in 00:21:28.565 256+0 records out 00:21:28.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182881 s, 5.7 MB/s 00:21:28.565 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:28.565 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:21:29.134 256+0 records in 00:21:29.134 256+0 records out 00:21:29.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.326744 s, 3.2 MB/s 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:21:29.134 256+0 records in 00:21:29.134 256+0 records out 00:21:29.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220614 s, 4.8 MB/s 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:21:29.134 
12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:21:29.134 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:29.393 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:29.393 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:29.393 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:29.393 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.393 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:29.393 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.393 12:52:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.393 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.652 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.911 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:30.170 12:52:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:30.429 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:21:30.689 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:21:30.689 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:21:30.689 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:21:30.689 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.689 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.690 
12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:21:30.690 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:30.690 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.690 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:30.690 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:30.690 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:30.950 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:30.951 malloc_lvol_verify 00:21:31.209 12:52:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:31.209 c88bf23f-f54a-46d7-a1ec-6a5664a69848 00:21:31.209 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:31.468 b405b240-8612-4203-9ff9-997fd3066246 00:21:31.726 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:31.726 /dev/nbd0 00:21:31.726 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:31.726 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:31.726 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:31.726 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:31.726 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:31.726 mke2fs 1.47.0 (5-Feb-2023) 00:21:31.726 Discarding device blocks: 0/4096 
done 00:21:31.726 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:31.726 00:21:31.726 Allocating group tables: 0/1 done 00:21:31.726 Writing inode tables: 0/1 done 00:21:31.726 Creating journal (1024 blocks): done 00:21:31.727 Writing superblocks and filesystem accounting information: 0/1 done 00:21:31.727 00:21:31.727 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:31.727 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:31.727 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:31.727 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:31.727 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:31.727 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:31.727 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83996 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83996 ']' 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83996 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83996 00:21:31.986 killing process with pid 83996 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83996' 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83996 00:21:31.986 12:52:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83996 00:21:32.245 ************************************ 00:21:32.245 END TEST bdev_nbd 00:21:32.245 ************************************ 00:21:32.245 12:52:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:32.245 00:21:32.245 real 0m10.720s 00:21:32.245 user 0m14.619s 00:21:32.245 sys 0m3.863s 00:21:32.245 12:52:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.245 12:52:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
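[editor's note] The bdev_nbd trace above repeatedly exercises one pattern per device: attach a bdev to /dev/nbdX via `rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk`, poll /proc/partitions until the kernel registers the device, then read one 4 KiB block with O_DIRECT and confirm it is non-empty. The real helpers (waitfornbd / waitfornbd_exit) live in test/common/autotest_common.sh; the standalone sketch below only mirrors what the xtrace output shows — the 20-attempt budget, the dd/stat/rm sequence, and the paths are taken from the trace, while the sleep interval and the tmp-file location are assumptions.

#!/usr/bin/env bash
# Illustrative re-creation of the wait-and-verify pattern seen in the
# trace above. Not the canonical SPDK helper, just a readable sketch.

waitfornbd() {
    local nbd_name=$1
    local i

    # Poll /proc/partitions until the nbd device shows up (the trace
    # shows a budget of 20 attempts; the interval between attempts is
    # elided in the xtrace output, so 0.1s here is an assumption).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Read one 4 KiB block with O_DIRECT and confirm a non-zero read,
    # matching the dd / stat / rm sequence in the trace (which also
    # retries this step up to 20 times before giving up).
    local tmp=/tmp/nbdtest size
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}

# Typical use, as in the trace: start the disk over the RPC socket,
# then wait for it before issuing I/O.
#   scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
#   waitfornbd nbd0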
00:21:32.245 12:52:32 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:32.245 12:52:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:21:32.245 12:52:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:21:32.245 12:52:32 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:21:32.245 12:52:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:32.246 12:52:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.246 12:52:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:32.246 ************************************ 00:21:32.246 START TEST bdev_fio 00:21:32.246 ************************************ 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:32.246 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:32.246 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:32.506 ************************************ 00:21:32.506 START TEST bdev_fio_rw_verify 00:21:32.506 ************************************ 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:32.506 12:52:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:32.506 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:32.506 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:32.506 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:32.506 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:32.506 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:32.506 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:32.506 fio-3.35 00:21:32.506 Starting 6 threads 00:21:44.794 00:21:44.794 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=84403: Thu Dec 5 12:52:42 2024 00:21:44.794 read: IOPS=14.9k, BW=58.3MiB/s (61.2MB/s)(583MiB/10002msec) 00:21:44.794 slat (usec): min=2, max=2593, avg= 7.05, stdev=19.88 00:21:44.795 clat (usec): min=75, max=8902, avg=1260.41, stdev=791.20 00:21:44.795 lat (usec): min=80, max=8906, avg=1267.46, stdev=791.89 
00:21:44.795 clat percentiles (usec): 00:21:44.795 | 50.000th=[ 1156], 99.000th=[ 3720], 99.900th=[ 5080], 99.990th=[ 6849], 00:21:44.795 | 99.999th=[ 8717] 00:21:44.795 write: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(595MiB/10002msec); 0 zone resets 00:21:44.795 slat (usec): min=3, max=5982, avg=45.58, stdev=143.59 00:21:44.795 clat (usec): min=87, max=14352, avg=1532.76, stdev=838.02 00:21:44.795 lat (usec): min=103, max=14394, avg=1578.35, stdev=850.86 00:21:44.795 clat percentiles (usec): 00:21:44.795 | 50.000th=[ 1418], 99.000th=[ 4113], 99.900th=[ 5604], 99.990th=[ 8225], 00:21:44.795 | 99.999th=[14353] 00:21:44.795 bw ( KiB/s): min=49142, max=77102, per=100.00%, avg=60942.21, stdev=1490.27, samples=114 00:21:44.795 iops : min=12284, max=19274, avg=15234.95, stdev=372.59, samples=114 00:21:44.795 lat (usec) : 100=0.01%, 250=3.95%, 500=8.46%, 750=10.41%, 1000=11.94% 00:21:44.795 lat (msec) : 2=45.31%, 4=19.03%, 10=0.89%, 20=0.01% 00:21:44.795 cpu : usr=46.19%, sys=30.64%, ctx=5528, majf=0, minf=15071 00:21:44.795 IO depths : 1=11.5%, 2=24.1%, 4=51.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:44.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.795 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.795 issued rwts: total=149373,152258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.795 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:44.795 00:21:44.795 Run status group 0 (all jobs): 00:21:44.795 READ: bw=58.3MiB/s (61.2MB/s), 58.3MiB/s-58.3MiB/s (61.2MB/s-61.2MB/s), io=583MiB (612MB), run=10002-10002msec 00:21:44.795 WRITE: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=595MiB (624MB), run=10002-10002msec 00:21:44.795 ----------------------------------------------------- 00:21:44.795 Suppressions used: 00:21:44.795 count bytes template 00:21:44.795 6 48 /usr/src/fio/parse.c 00:21:44.795 2789 267744 /usr/src/fio/iolog.c 00:21:44.795 1 8 libtcmalloc_minimal.so 00:21:44.795 1 904 libcrypto.so 00:21:44.795 ----------------------------------------------------- 00:21:44.795 00:21:44.795 00:21:44.795 real 0m11.191s 00:21:44.795 user 0m28.445s 00:21:44.795 sys 0m18.701s 00:21:44.795 ************************************ 00:21:44.795 END TEST bdev_fio_rw_verify 00:21:44.795 ************************************ 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "20a9c952-99e7-4534-9caa-8cdb907d17f0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "20a9c952-99e7-4534-9caa-8cdb907d17f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "69a8a539-ebbb-4a05-8642-baecd309a2d2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "69a8a539-ebbb-4a05-8642-baecd309a2d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c859cedc-6ec7-49e8-b82c-aa547279faf9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c859cedc-6ec7-49e8-b82c-aa547279faf9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2bda77d1-4426-4fe6-8499-e9c3f01bf065"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2bda77d1-4426-4fe6-8499-e9c3f01bf065",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "784589b2-86ef-4788-9ff9-9ff346496b88"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "784589b2-86ef-4788-9ff9-9ff346496b88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "201d64f3-a03f-427a-8bd7-edf867524af5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "201d64f3-a03f-427a-8bd7-edf867524af5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:44.795 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:44.796 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:44.796 /home/vagrant/spdk_repo/spdk 00:21:44.796 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:44.796 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:44.796 12:52:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:21:44.796 00:21:44.796 real 0m11.351s 00:21:44.796 user 
0m28.518s 00:21:44.796 sys 0m18.771s 00:21:44.796 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.796 ************************************ 00:21:44.796 END TEST bdev_fio 00:21:44.796 ************************************ 00:21:44.796 12:52:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:44.796 12:52:43 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:44.796 12:52:43 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:44.796 12:52:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:44.796 12:52:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.796 12:52:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:44.796 ************************************ 00:21:44.796 START TEST bdev_verify 00:21:44.796 ************************************ 00:21:44.796 12:52:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:44.796 [2024-12-05 12:52:43.555365] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:21:44.796 [2024-12-05 12:52:43.555517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84568 ] 00:21:44.796 [2024-12-05 12:52:43.716601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:44.796 [2024-12-05 12:52:43.752923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.796 [2024-12-05 12:52:43.752936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.796 Running I/O for 5 seconds... 
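The bdevperf run just launched is reproducible outside the harness. A minimal sketch, assuming the SPDK repo location shown in this log and the bdev.json generated earlier; the flag annotations describe the behaviour observable in the results that follow:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk        # repo path taken from the log above
    "$SPDK_DIR"/build/examples/bdevperf \
        --json "$SPDK_DIR"/test/bdev/bdev.json \ # the six xNVMe bdevs dumped earlier
        -q 128 \                                 # queue depth per job
        -o 4096 \                                # 4 KiB I/Os
        -w verify \                              # write, then read back and verify
        -t 5 \                                   # 5-second run
        -C \                                     # every core targets every bdev (hence the paired 0x1/0x2 rows below)
        -m 0x3                                   # EAL core mask: cores 0 and 1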
00:21:46.744 20480.00 IOPS, 80.00 MiB/s [2024-12-05T12:52:47.536Z] 21360.00 IOPS, 83.44 MiB/s [2024-12-05T12:52:48.481Z] 21632.00 IOPS, 84.50 MiB/s [2024-12-05T12:52:49.423Z] 21848.00 IOPS, 85.34 MiB/s [2024-12-05T12:52:49.423Z] 21408.00 IOPS, 83.63 MiB/s 00:21:49.571 Latency(us) 00:21:49.571 [2024-12-05T12:52:49.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.571 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x0 length 0x80000 00:21:49.571 nvme0n1 : 5.07 1690.62 6.60 0.00 0.00 75553.26 6049.48 80659.69 00:21:49.571 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x80000 length 0x80000 00:21:49.571 nvme0n1 : 5.08 1686.97 6.59 0.00 0.00 75716.44 6225.92 78239.90 00:21:49.571 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x0 length 0x80000 00:21:49.571 nvme0n2 : 5.05 1647.48 6.44 0.00 0.00 77346.49 9830.40 82272.89 00:21:49.571 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x80000 length 0x80000 00:21:49.571 nvme0n2 : 5.06 1669.62 6.52 0.00 0.00 76358.54 7713.08 79449.80 00:21:49.571 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x0 length 0x80000 00:21:49.571 nvme0n3 : 5.08 1662.17 6.49 0.00 0.00 76486.74 7410.61 83079.48 00:21:49.571 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x80000 length 0x80000 00:21:49.571 nvme0n3 : 5.07 1665.63 6.51 0.00 0.00 76377.37 11393.18 65737.65 00:21:49.571 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x0 length 0x20000 00:21:49.571 nvme1n1 : 5.05 1646.91 6.43 0.00 0.00 77016.09 10132.87 72190.42 00:21:49.571 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x20000 length 0x20000 00:21:49.571 nvme1n1 : 5.07 1665.07 6.50 0.00 0.00 76240.29 10384.94 71383.83 00:21:49.571 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x0 length 0xbd0bd 00:21:49.571 nvme2n1 : 5.08 2152.77 8.41 0.00 0.00 58719.50 6024.27 66544.25 00:21:49.571 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:21:49.571 nvme2n1 : 5.10 2228.23 8.70 0.00 0.00 56616.82 5545.35 68964.04 00:21:49.571 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0x0 length 0xa0000 00:21:49.571 nvme3n1 : 5.09 1734.99 6.78 0.00 0.00 72674.21 6351.95 74206.92 00:21:49.571 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:49.571 Verification LBA range: start 0xa0000 length 0xa0000 00:21:49.571 nvme3n1 : 5.09 1683.43 6.58 0.00 0.00 74917.98 6301.54 77433.30 00:21:49.571 [2024-12-05T12:52:49.423Z] =================================================================================================================== 00:21:49.571 [2024-12-05T12:52:49.423Z] Total : 21133.89 82.55 0.00 0.00 72064.99 5545.35 83079.48 00:21:49.571 ************************************ 00:21:49.571 END TEST bdev_verify 00:21:49.571 ************************************ 00:21:49.571 00:21:49.571 real 
0m5.832s 00:21:49.571 user 0m9.379s 00:21:49.571 sys 0m1.391s 00:21:49.571 12:52:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.571 12:52:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:49.571 12:52:49 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:49.571 12:52:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:49.571 12:52:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.571 12:52:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:49.571 ************************************ 00:21:49.571 START TEST bdev_verify_big_io 00:21:49.571 ************************************ 00:21:49.571 12:52:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:49.829 [2024-12-05 12:52:49.461821] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:21:49.829 [2024-12-05 12:52:49.461970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84657 ] 00:21:49.829 [2024-12-05 12:52:49.621361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:49.829 [2024-12-05 12:52:49.649034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.829 [2024-12-05 12:52:49.649139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.086 Running I/O for 5 seconds... 
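A quick sanity check on the tables these runs produce: the MiB/s column is simply IOPS multiplied by the I/O size. For the 4 KiB verify run above, the final 21408-IOPS sample works out as:

    echo '21408 * 4096 / 1048576' | bc -l    # = 83.625, i.e. the "83.63 MiB/s" in the table
    # the big-I/O run now starting uses -o 65536, so each I/O counts for 64 KiB instead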
00:21:55.965 1040.00 IOPS, 65.00 MiB/s [2024-12-05T12:52:56.075Z] 2247.50 IOPS, 140.47 MiB/s [2024-12-05T12:52:56.652Z] 2393.00 IOPS, 149.56 MiB/s 00:21:56.800 Latency(us) 00:21:56.800 [2024-12-05T12:52:56.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.800 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x0 length 0x8000 00:21:56.800 nvme0n1 : 6.17 124.44 7.78 0.00 0.00 998637.37 5747.00 1090519.04 00:21:56.800 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x8000 length 0x8000 00:21:56.800 nvme0n1 : 6.02 95.71 5.98 0.00 0.00 1303652.47 134701.69 1832588.21 00:21:56.800 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x0 length 0x8000 00:21:56.800 nvme0n2 : 6.02 82.41 5.15 0.00 0.00 1449937.65 232299.91 2632732.36 00:21:56.800 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x8000 length 0x8000 00:21:56.800 nvme0n2 : 6.02 100.14 6.26 0.00 0.00 1198954.16 138734.67 1529307.77 00:21:56.800 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x0 length 0x8000 00:21:56.800 nvme0n3 : 6.17 132.18 8.26 0.00 0.00 875196.46 72997.02 1155046.79 00:21:56.800 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x8000 length 0x8000 00:21:56.800 nvme0n3 : 6.02 116.87 7.30 0.00 0.00 996164.39 62511.26 922746.88 00:21:56.800 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x0 length 0x2000 00:21:56.800 nvme1n1 : 6.24 100.07 6.25 0.00 0.00 1124013.49 117763.15 2594015.70 00:21:56.800 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x2000 length 0x2000 00:21:56.800 nvme1n1 : 6.03 103.55 6.47 0.00 0.00 1081304.57 121796.14 2013265.92 00:21:56.800 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x0 length 0xbd0b 00:21:56.800 nvme2n1 : 6.52 144.89 9.06 0.00 0.00 741359.75 285.14 803370.54 00:21:56.800 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:56.800 nvme2n1 : 6.52 137.51 8.59 0.00 0.00 786502.59 343.43 1019538.51 00:21:56.800 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0x0 length 0xa000 00:21:56.800 nvme3n1 : 6.26 102.24 6.39 0.00 0.00 1039005.22 7208.96 2168132.53 00:21:56.800 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:56.800 Verification LBA range: start 0xa000 length 0xa000 00:21:56.800 nvme3n1 : 6.29 111.86 6.99 0.00 0.00 928878.60 6654.42 2129415.88 00:21:56.800 [2024-12-05T12:52:56.652Z] =================================================================================================================== 00:21:56.800 [2024-12-05T12:52:56.652Z] Total : 1351.86 84.49 0.00 0.00 1009531.64 285.14 2632732.36 00:21:57.059 00:21:57.059 real 0m7.274s 00:21:57.059 user 0m13.471s 00:21:57.059 sys 0m0.421s 00:21:57.059 12:52:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.059 
************************************ 00:21:57.059 END TEST bdev_verify_big_io 00:21:57.059 ************************************ 00:21:57.059 12:52:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:57.059 12:52:56 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:57.059 12:52:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:57.059 12:52:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.059 12:52:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:57.059 ************************************ 00:21:57.059 START TEST bdev_write_zeroes 00:21:57.059 ************************************ 00:21:57.059 12:52:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:57.059 [2024-12-05 12:52:56.818264] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:21:57.060 [2024-12-05 12:52:56.818411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84761 ] 00:21:57.318 [2024-12-05 12:52:56.979936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.318 [2024-12-05 12:52:57.005081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.577 Running I/O for 1 seconds... 
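Every test in this log is wrapped by the same run_test helper, which accounts for the recurring START/END banners and real/user/sys timings. A sketch of its observable shape only; the actual helper lives in common/autotest_common.sh and does more, including the '[' N -le 1 ']' argument checks and xtrace toggling seen above:

    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                  # source of the logged real/user/sys lines
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }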
00:21:58.514 61792.00 IOPS, 241.38 MiB/s 00:21:58.514 Latency(us) 00:21:58.514 [2024-12-05T12:52:58.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.514 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:58.514 nvme0n1 : 1.02 9925.15 38.77 0.00 0.00 12882.62 5520.15 27625.94 00:21:58.514 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:58.514 nvme0n2 : 1.02 9906.32 38.70 0.00 0.00 12896.37 5494.94 28230.89 00:21:58.514 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:58.514 nvme0n3 : 1.02 9891.00 38.64 0.00 0.00 12901.83 5167.26 28634.19 00:21:58.514 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:58.514 nvme1n1 : 1.03 9979.52 38.98 0.00 0.00 12777.82 5293.29 26819.35 00:21:58.514 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:58.514 nvme2n1 : 1.02 11341.31 44.30 0.00 0.00 11228.88 4184.22 23693.78 00:21:58.514 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:58.514 nvme3n1 : 1.03 10336.41 40.38 0.00 0.00 12236.20 5368.91 28432.54 00:21:58.514 [2024-12-05T12:52:58.366Z] =================================================================================================================== 00:21:58.514 [2024-12-05T12:52:58.366Z] Total : 61379.71 239.76 0.00 0.00 12455.70 4184.22 28634.19 00:21:58.775 00:21:58.775 real 0m1.727s 00:21:58.775 user 0m1.049s 00:21:58.775 sys 0m0.465s 00:21:58.775 12:52:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.775 12:52:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:58.775 ************************************ 00:21:58.775 END TEST bdev_write_zeroes 00:21:58.775 ************************************ 00:21:58.775 12:52:58 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:58.775 12:52:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:58.775 12:52:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.775 12:52:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:58.775 ************************************ 00:21:58.775 START TEST bdev_json_nonenclosed 00:21:58.775 ************************************ 00:21:58.775 12:52:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:58.775 [2024-12-05 12:52:58.611889] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:21:58.775 [2024-12-05 12:52:58.612016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84798 ] 00:21:59.061 [2024-12-05 12:52:58.769296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.061 [2024-12-05 12:52:58.794223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.061 [2024-12-05 12:52:58.794320] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:59.062 [2024-12-05 12:52:58.794337] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:59.062 [2024-12-05 12:52:58.794351] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:59.062 00:21:59.062 real 0m0.326s 00:21:59.062 user 0m0.131s 00:21:59.062 sys 0m0.090s 00:21:59.062 12:52:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.062 12:52:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:59.062 ************************************ 00:21:59.062 END TEST bdev_json_nonenclosed 00:21:59.062 ************************************ 00:21:59.323 12:52:58 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.323 12:52:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:59.323 12:52:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.323 12:52:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:59.323 ************************************ 00:21:59.323 START TEST bdev_json_nonarray 00:21:59.323 ************************************ 00:21:59.323 12:52:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.323 [2024-12-05 12:52:59.006137] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:21:59.323 [2024-12-05 12:52:59.006265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84823 ] 00:21:59.323 [2024-12-05 12:52:59.167271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.598 [2024-12-05 12:52:59.192775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.598 [2024-12-05 12:52:59.192905] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
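The bdev_json_nonenclosed test above and the bdev_json_nonarray test finishing below are deliberate negative tests: bdevperf is handed one config whose top level is not a JSON object ("not enclosed in {}") and one whose "subsystems" key is not an array. A minimal reproduction; the file contents here are assumptions, since the real nonenclosed.json and nonarray.json are not printed in this log:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    printf '[]\n'                   > /tmp/nonenclosed.json   # top level is an array, not an object
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json      # "subsystems" is an object, not an array
    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        "$SPDK_DIR"/build/examples/bdevperf --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1
        echo "$cfg -> exit $?"      # both are expected to fail, as in the runs above
    done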
00:21:59.598 [2024-12-05 12:52:59.192922] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:59.598 [2024-12-05 12:52:59.192935] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:59.598 00:21:59.598 real 0m0.332s 00:21:59.598 user 0m0.133s 00:21:59.598 sys 0m0.095s 00:21:59.598 12:52:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.598 12:52:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:59.598 ************************************ 00:21:59.598 END TEST bdev_json_nonarray 00:21:59.598 ************************************ 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:59.598 12:52:59 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:00.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:12.416 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:12.416 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:16.609 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:16.609 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:16.609 00:22:16.609 real 0m59.460s 00:22:16.609 user 1m14.904s 00:22:16.609 sys 0m45.849s 00:22:16.609 12:53:15 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.609 ************************************ 00:22:16.609 END TEST blockdev_xnvme 00:22:16.609 12:53:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:16.609 ************************************ 00:22:16.609 12:53:15 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:16.609 12:53:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:16.609 12:53:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.609 12:53:15 -- common/autotest_common.sh@10 -- # set +x 00:22:16.609 ************************************ 00:22:16.609 START TEST ublk 00:22:16.609 ************************************ 00:22:16.609 12:53:15 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:16.609 * Looking for test storage... 
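The "nvme -> uio_pci_generic" lines above are scripts/setup.sh detaching the emulated NVMe controllers (vendor:device 1b36:0010 in this log) from the kernel nvme driver so a userspace process can claim them; the virtio disk at 0000:00:03.0 is skipped because it backs active mounts. A rough manual equivalent for one controller, using standard sysfs driver binding:

    modprobe uio_pci_generic
    echo 0000:00:10.0 > /sys/bus/pci/devices/0000:00:10.0/driver/unbind
    echo '1b36 0010'  > /sys/bus/pci/drivers/uio_pci_generic/new_id   # auto-binds matching unbound devices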
00:22:16.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.609 12:53:16 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.609 12:53:16 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.609 12:53:16 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.609 12:53:16 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.609 12:53:16 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.609 12:53:16 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.609 12:53:16 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.609 12:53:16 ublk -- scripts/common.sh@344 -- # case "$op" in 00:22:16.609 12:53:16 ublk -- scripts/common.sh@345 -- # : 1 00:22:16.609 12:53:16 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.609 12:53:16 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:16.609 12:53:16 ublk -- scripts/common.sh@365 -- # decimal 1 00:22:16.609 12:53:16 ublk -- scripts/common.sh@353 -- # local d=1 00:22:16.609 12:53:16 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.609 12:53:16 ublk -- scripts/common.sh@355 -- # echo 1 00:22:16.609 12:53:16 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.609 12:53:16 ublk -- scripts/common.sh@366 -- # decimal 2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@353 -- # local d=2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.609 12:53:16 ublk -- scripts/common.sh@355 -- # echo 2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.609 12:53:16 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.609 12:53:16 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.609 12:53:16 ublk -- scripts/common.sh@368 -- # return 0 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.609 --rc genhtml_branch_coverage=1 00:22:16.609 --rc genhtml_function_coverage=1 00:22:16.609 --rc genhtml_legend=1 00:22:16.609 --rc geninfo_all_blocks=1 00:22:16.609 --rc geninfo_unexecuted_blocks=1 00:22:16.609 00:22:16.609 ' 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.609 --rc genhtml_branch_coverage=1 00:22:16.609 --rc genhtml_function_coverage=1 00:22:16.609 --rc genhtml_legend=1 00:22:16.609 --rc geninfo_all_blocks=1 00:22:16.609 --rc geninfo_unexecuted_blocks=1 00:22:16.609 00:22:16.609 ' 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.609 --rc genhtml_branch_coverage=1 00:22:16.609 --rc 
genhtml_function_coverage=1 00:22:16.609 --rc genhtml_legend=1 00:22:16.609 --rc geninfo_all_blocks=1 00:22:16.609 --rc geninfo_unexecuted_blocks=1 00:22:16.609 00:22:16.609 ' 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.609 --rc genhtml_branch_coverage=1 00:22:16.609 --rc genhtml_function_coverage=1 00:22:16.609 --rc genhtml_legend=1 00:22:16.609 --rc geninfo_all_blocks=1 00:22:16.609 --rc geninfo_unexecuted_blocks=1 00:22:16.609 00:22:16.609 ' 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:16.609 12:53:16 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:16.609 12:53:16 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:16.609 12:53:16 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:16.609 12:53:16 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:16.609 12:53:16 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:16.609 12:53:16 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:16.609 12:53:16 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:16.609 12:53:16 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:22:16.609 12:53:16 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.609 12:53:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.609 ************************************ 00:22:16.609 START TEST test_save_ublk_config 00:22:16.609 ************************************ 00:22:16.609 12:53:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:22:16.609 12:53:16 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:22:16.609 12:53:16 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=85135 00:22:16.609 12:53:16 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:22:16.609 12:53:16 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 85135 00:22:16.610 12:53:16 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85135 ']' 00:22:16.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.610 12:53:16 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.610 12:53:16 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.610 12:53:16 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
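test_save_ublk_config, now starting, brings up spdk_tgt with ublk tracing enabled, creates a ublk disk backed by a malloc bdev, and snapshots the whole configuration. A rough manual equivalent over the RPC socket, with parameters read from the saved JSON that follows (8192 blocks x 4096 B = 32 MiB for malloc0; ublk id 0, 1 queue, depth 128); the exact rpc.py option spellings are an assumption and may vary by SPDK version:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR"/build/bin/spdk_tgt -L ublk &                            # as launched below
    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 32 MiB, 4 KiB blocks
    "$SPDK_DIR"/scripts/rpc.py ublk_create_target --cpumask 1
    "$SPDK_DIR"/scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # exposes /dev/ublkb0
    "$SPDK_DIR"/scripts/rpc.py save_config > /tmp/ublk_config.json      # the JSON dumped below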
00:22:16.610 12:53:16 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.610 12:53:16 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:22:16.610 12:53:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:16.610 [2024-12-05 12:53:16.214986] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:22:16.610 [2024-12-05 12:53:16.215139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85135 ] 00:22:16.610 [2024-12-05 12:53:16.374294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.610 [2024-12-05 12:53:16.399588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:17.551 [2024-12-05 12:53:17.079828] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:17.551 [2024-12-05 12:53:17.080641] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:17.551 malloc0 00:22:17.551 [2024-12-05 12:53:17.111944] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:17.551 [2024-12-05 12:53:17.112030] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:17.551 [2024-12-05 12:53:17.112042] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:17.551 [2024-12-05 12:53:17.112055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:17.551 [2024-12-05 12:53:17.120914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:17.551 [2024-12-05 12:53:17.120946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:17.551 [2024-12-05 12:53:17.123838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:17.551 [2024-12-05 12:53:17.123967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:17.551 [2024-12-05 12:53:17.134876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:17.551 0 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.551 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:17.812 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.812 12:53:17 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:22:17.812 "subsystems": [ 00:22:17.812 { 00:22:17.812 "subsystem": 
"fsdev", 00:22:17.812 "config": [ 00:22:17.812 { 00:22:17.812 "method": "fsdev_set_opts", 00:22:17.812 "params": { 00:22:17.812 "fsdev_io_pool_size": 65535, 00:22:17.812 "fsdev_io_cache_size": 256 00:22:17.812 } 00:22:17.812 } 00:22:17.812 ] 00:22:17.812 }, 00:22:17.812 { 00:22:17.812 "subsystem": "keyring", 00:22:17.812 "config": [] 00:22:17.812 }, 00:22:17.812 { 00:22:17.812 "subsystem": "iobuf", 00:22:17.812 "config": [ 00:22:17.812 { 00:22:17.812 "method": "iobuf_set_options", 00:22:17.812 "params": { 00:22:17.812 "small_pool_count": 8192, 00:22:17.812 "large_pool_count": 1024, 00:22:17.812 "small_bufsize": 8192, 00:22:17.812 "large_bufsize": 135168, 00:22:17.812 "enable_numa": false 00:22:17.812 } 00:22:17.812 } 00:22:17.812 ] 00:22:17.812 }, 00:22:17.812 { 00:22:17.812 "subsystem": "sock", 00:22:17.812 "config": [ 00:22:17.812 { 00:22:17.812 "method": "sock_set_default_impl", 00:22:17.812 "params": { 00:22:17.812 "impl_name": "posix" 00:22:17.812 } 00:22:17.812 }, 00:22:17.812 { 00:22:17.812 "method": "sock_impl_set_options", 00:22:17.812 "params": { 00:22:17.812 "impl_name": "ssl", 00:22:17.812 "recv_buf_size": 4096, 00:22:17.812 "send_buf_size": 4096, 00:22:17.812 "enable_recv_pipe": true, 00:22:17.812 "enable_quickack": false, 00:22:17.812 "enable_placement_id": 0, 00:22:17.812 "enable_zerocopy_send_server": true, 00:22:17.812 "enable_zerocopy_send_client": false, 00:22:17.812 "zerocopy_threshold": 0, 00:22:17.812 "tls_version": 0, 00:22:17.812 "enable_ktls": false 00:22:17.812 } 00:22:17.812 }, 00:22:17.812 { 00:22:17.812 "method": "sock_impl_set_options", 00:22:17.812 "params": { 00:22:17.812 "impl_name": "posix", 00:22:17.812 "recv_buf_size": 2097152, 00:22:17.812 "send_buf_size": 2097152, 00:22:17.812 "enable_recv_pipe": true, 00:22:17.812 "enable_quickack": false, 00:22:17.812 "enable_placement_id": 0, 00:22:17.813 "enable_zerocopy_send_server": true, 00:22:17.813 "enable_zerocopy_send_client": false, 00:22:17.813 "zerocopy_threshold": 0, 00:22:17.813 "tls_version": 0, 00:22:17.813 "enable_ktls": false 00:22:17.813 } 00:22:17.813 } 00:22:17.813 ] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "vmd", 00:22:17.813 "config": [] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "accel", 00:22:17.813 "config": [ 00:22:17.813 { 00:22:17.813 "method": "accel_set_options", 00:22:17.813 "params": { 00:22:17.813 "small_cache_size": 128, 00:22:17.813 "large_cache_size": 16, 00:22:17.813 "task_count": 2048, 00:22:17.813 "sequence_count": 2048, 00:22:17.813 "buf_count": 2048 00:22:17.813 } 00:22:17.813 } 00:22:17.813 ] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "bdev", 00:22:17.813 "config": [ 00:22:17.813 { 00:22:17.813 "method": "bdev_set_options", 00:22:17.813 "params": { 00:22:17.813 "bdev_io_pool_size": 65535, 00:22:17.813 "bdev_io_cache_size": 256, 00:22:17.813 "bdev_auto_examine": true, 00:22:17.813 "iobuf_small_cache_size": 128, 00:22:17.813 "iobuf_large_cache_size": 16 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "bdev_raid_set_options", 00:22:17.813 "params": { 00:22:17.813 "process_window_size_kb": 1024, 00:22:17.813 "process_max_bandwidth_mb_sec": 0 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "bdev_iscsi_set_options", 00:22:17.813 "params": { 00:22:17.813 "timeout_sec": 30 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "bdev_nvme_set_options", 00:22:17.813 "params": { 00:22:17.813 "action_on_timeout": "none", 00:22:17.813 "timeout_us": 0, 00:22:17.813 "timeout_admin_us": 0, 
00:22:17.813 "keep_alive_timeout_ms": 10000, 00:22:17.813 "arbitration_burst": 0, 00:22:17.813 "low_priority_weight": 0, 00:22:17.813 "medium_priority_weight": 0, 00:22:17.813 "high_priority_weight": 0, 00:22:17.813 "nvme_adminq_poll_period_us": 10000, 00:22:17.813 "nvme_ioq_poll_period_us": 0, 00:22:17.813 "io_queue_requests": 0, 00:22:17.813 "delay_cmd_submit": true, 00:22:17.813 "transport_retry_count": 4, 00:22:17.813 "bdev_retry_count": 3, 00:22:17.813 "transport_ack_timeout": 0, 00:22:17.813 "ctrlr_loss_timeout_sec": 0, 00:22:17.813 "reconnect_delay_sec": 0, 00:22:17.813 "fast_io_fail_timeout_sec": 0, 00:22:17.813 "disable_auto_failback": false, 00:22:17.813 "generate_uuids": false, 00:22:17.813 "transport_tos": 0, 00:22:17.813 "nvme_error_stat": false, 00:22:17.813 "rdma_srq_size": 0, 00:22:17.813 "io_path_stat": false, 00:22:17.813 "allow_accel_sequence": false, 00:22:17.813 "rdma_max_cq_size": 0, 00:22:17.813 "rdma_cm_event_timeout_ms": 0, 00:22:17.813 "dhchap_digests": [ 00:22:17.813 "sha256", 00:22:17.813 "sha384", 00:22:17.813 "sha512" 00:22:17.813 ], 00:22:17.813 "dhchap_dhgroups": [ 00:22:17.813 "null", 00:22:17.813 "ffdhe2048", 00:22:17.813 "ffdhe3072", 00:22:17.813 "ffdhe4096", 00:22:17.813 "ffdhe6144", 00:22:17.813 "ffdhe8192" 00:22:17.813 ] 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "bdev_nvme_set_hotplug", 00:22:17.813 "params": { 00:22:17.813 "period_us": 100000, 00:22:17.813 "enable": false 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "bdev_malloc_create", 00:22:17.813 "params": { 00:22:17.813 "name": "malloc0", 00:22:17.813 "num_blocks": 8192, 00:22:17.813 "block_size": 4096, 00:22:17.813 "physical_block_size": 4096, 00:22:17.813 "uuid": "fb6224cf-198e-4e37-a764-5728623b68e9", 00:22:17.813 "optimal_io_boundary": 0, 00:22:17.813 "md_size": 0, 00:22:17.813 "dif_type": 0, 00:22:17.813 "dif_is_head_of_md": false, 00:22:17.813 "dif_pi_format": 0 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "bdev_wait_for_examine" 00:22:17.813 } 00:22:17.813 ] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "scsi", 00:22:17.813 "config": null 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "scheduler", 00:22:17.813 "config": [ 00:22:17.813 { 00:22:17.813 "method": "framework_set_scheduler", 00:22:17.813 "params": { 00:22:17.813 "name": "static" 00:22:17.813 } 00:22:17.813 } 00:22:17.813 ] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "vhost_scsi", 00:22:17.813 "config": [] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "vhost_blk", 00:22:17.813 "config": [] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "ublk", 00:22:17.813 "config": [ 00:22:17.813 { 00:22:17.813 "method": "ublk_create_target", 00:22:17.813 "params": { 00:22:17.813 "cpumask": "1" 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "ublk_start_disk", 00:22:17.813 "params": { 00:22:17.813 "bdev_name": "malloc0", 00:22:17.813 "ublk_id": 0, 00:22:17.813 "num_queues": 1, 00:22:17.813 "queue_depth": 128 00:22:17.813 } 00:22:17.813 } 00:22:17.813 ] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "nbd", 00:22:17.813 "config": [] 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "subsystem": "nvmf", 00:22:17.813 "config": [ 00:22:17.813 { 00:22:17.813 "method": "nvmf_set_config", 00:22:17.813 "params": { 00:22:17.813 "discovery_filter": "match_any", 00:22:17.813 "admin_cmd_passthru": { 00:22:17.813 "identify_ctrlr": false 00:22:17.813 }, 00:22:17.813 "dhchap_digests": [ 00:22:17.813 "sha256", 
00:22:17.813 "sha384", 00:22:17.813 "sha512" 00:22:17.813 ], 00:22:17.813 "dhchap_dhgroups": [ 00:22:17.813 "null", 00:22:17.813 "ffdhe2048", 00:22:17.813 "ffdhe3072", 00:22:17.813 "ffdhe4096", 00:22:17.813 "ffdhe6144", 00:22:17.813 "ffdhe8192" 00:22:17.813 ] 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "nvmf_set_max_subsystems", 00:22:17.813 "params": { 00:22:17.813 "max_subsystems": 1024 00:22:17.813 } 00:22:17.813 }, 00:22:17.813 { 00:22:17.813 "method": "nvmf_set_crdt", 00:22:17.814 "params": { 00:22:17.814 "crdt1": 0, 00:22:17.814 "crdt2": 0, 00:22:17.814 "crdt3": 0 00:22:17.814 } 00:22:17.814 } 00:22:17.814 ] 00:22:17.814 }, 00:22:17.814 { 00:22:17.814 "subsystem": "iscsi", 00:22:17.814 "config": [ 00:22:17.814 { 00:22:17.814 "method": "iscsi_set_options", 00:22:17.814 "params": { 00:22:17.814 "node_base": "iqn.2016-06.io.spdk", 00:22:17.814 "max_sessions": 128, 00:22:17.814 "max_connections_per_session": 2, 00:22:17.814 "max_queue_depth": 64, 00:22:17.814 "default_time2wait": 2, 00:22:17.814 "default_time2retain": 20, 00:22:17.814 "first_burst_length": 8192, 00:22:17.814 "immediate_data": true, 00:22:17.814 "allow_duplicated_isid": false, 00:22:17.814 "error_recovery_level": 0, 00:22:17.814 "nop_timeout": 60, 00:22:17.814 "nop_in_interval": 30, 00:22:17.814 "disable_chap": false, 00:22:17.814 "require_chap": false, 00:22:17.814 "mutual_chap": false, 00:22:17.814 "chap_group": 0, 00:22:17.814 "max_large_datain_per_connection": 64, 00:22:17.814 "max_r2t_per_connection": 4, 00:22:17.814 "pdu_pool_size": 36864, 00:22:17.814 "immediate_data_pool_size": 16384, 00:22:17.814 "data_out_pool_size": 2048 00:22:17.814 } 00:22:17.814 } 00:22:17.814 ] 00:22:17.814 } 00:22:17.814 ] 00:22:17.814 }' 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 85135 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85135 ']' 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85135 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85135 00:22:17.814 killing process with pid 85135 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85135' 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85135 00:22:17.814 12:53:17 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85135 00:22:18.074 [2024-12-05 12:53:17.695530] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:18.074 [2024-12-05 12:53:17.730861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:18.074 [2024-12-05 12:53:17.731035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:18.074 [2024-12-05 12:53:17.737049] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:18.074 [2024-12-05 12:53:17.737128] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:22:18.074 [2024-12-05 12:53:17.737136] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:18.074 [2024-12-05 12:53:17.737166] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:18.074 [2024-12-05 12:53:17.737314] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:18.335 12:53:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=85173 00:22:18.335 12:53:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:22:18.335 12:53:18 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 85173 00:22:18.335 12:53:18 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85173 ']' 00:22:18.335 12:53:18 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.335 12:53:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:22:18.335 "subsystems": [ 00:22:18.335 { 00:22:18.335 "subsystem": "fsdev", 00:22:18.335 "config": [ 00:22:18.335 { 00:22:18.335 "method": "fsdev_set_opts", 00:22:18.335 "params": { 00:22:18.335 "fsdev_io_pool_size": 65535, 00:22:18.335 "fsdev_io_cache_size": 256 00:22:18.335 } 00:22:18.335 } 00:22:18.335 ] 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "subsystem": "keyring", 00:22:18.335 "config": [] 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "subsystem": "iobuf", 00:22:18.335 "config": [ 00:22:18.335 { 00:22:18.335 "method": "iobuf_set_options", 00:22:18.335 "params": { 00:22:18.335 "small_pool_count": 8192, 00:22:18.335 "large_pool_count": 1024, 00:22:18.335 "small_bufsize": 8192, 00:22:18.335 "large_bufsize": 135168, 00:22:18.335 "enable_numa": false 00:22:18.335 } 00:22:18.335 } 00:22:18.335 ] 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "subsystem": "sock", 00:22:18.335 "config": [ 00:22:18.335 { 00:22:18.335 "method": "sock_set_default_impl", 00:22:18.335 "params": { 00:22:18.335 "impl_name": "posix" 00:22:18.335 } 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "method": "sock_impl_set_options", 00:22:18.335 "params": { 00:22:18.335 "impl_name": "ssl", 00:22:18.335 "recv_buf_size": 4096, 00:22:18.335 "send_buf_size": 4096, 00:22:18.335 "enable_recv_pipe": true, 00:22:18.335 "enable_quickack": false, 00:22:18.335 "enable_placement_id": 0, 00:22:18.335 "enable_zerocopy_send_server": true, 00:22:18.335 "enable_zerocopy_send_client": false, 00:22:18.335 "zerocopy_threshold": 0, 00:22:18.335 "tls_version": 0, 00:22:18.335 "enable_ktls": false 00:22:18.335 } 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "method": "sock_impl_set_options", 00:22:18.335 "params": { 00:22:18.335 "impl_name": "posix", 00:22:18.335 "recv_buf_size": 2097152, 00:22:18.335 "send_buf_size": 2097152, 00:22:18.335 "enable_recv_pipe": true, 00:22:18.335 "enable_quickack": false, 00:22:18.335 "enable_placement_id": 0, 00:22:18.335 "enable_zerocopy_send_server": true, 00:22:18.335 "enable_zerocopy_send_client": false, 00:22:18.335 "zerocopy_threshold": 0, 00:22:18.335 "tls_version": 0, 00:22:18.335 "enable_ktls": false 00:22:18.335 } 00:22:18.335 } 00:22:18.335 ] 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "subsystem": "vmd", 00:22:18.335 "config": [] 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "subsystem": "accel", 00:22:18.335 "config": [ 00:22:18.335 { 00:22:18.335 "method": "accel_set_options", 00:22:18.335 "params": { 00:22:18.335 "small_cache_size": 128, 00:22:18.335 "large_cache_size": 16, 00:22:18.335 "task_count": 2048, 00:22:18.335 "sequence_count": 2048, 00:22:18.335 "buf_count": 2048 00:22:18.335 } 00:22:18.335 } 
00:22:18.335 ] 00:22:18.335 }, 00:22:18.335 { 00:22:18.335 "subsystem": "bdev", 00:22:18.336 "config": [ 00:22:18.336 { 00:22:18.336 "method": "bdev_set_options", 00:22:18.336 "params": { 00:22:18.336 "bdev_io_pool_size": 65535, 00:22:18.336 "bdev_io_cache_size": 256, 00:22:18.336 "bdev_auto_examine": true, 00:22:18.336 "iobuf_small_cache_size": 128, 00:22:18.336 "iobuf_large_cache_size": 16 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "bdev_raid_set_options", 00:22:18.336 "params": { 00:22:18.336 "process_window_size_kb": 1024, 00:22:18.336 "process_max_bandwidth_mb_sec": 0 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "bdev_iscsi_set_options", 00:22:18.336 "params": { 00:22:18.336 "timeout_sec": 30 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "bdev_nvme_set_options", 00:22:18.336 "params": { 00:22:18.336 "action_on_timeout": "none", 00:22:18.336 "timeout_us": 0, 00:22:18.336 "timeout_admin_us": 0, 00:22:18.336 "keep_alive_timeout_ms": 10000, 00:22:18.336 "arbitration_burst": 0, 00:22:18.336 "low_priority_weight": 0, 00:22:18.336 "medium_priority_weight": 0, 00:22:18.336 "high_priority_weight": 0, 00:22:18.336 "nvme_adminq_poll_period_us": 10000, 00:22:18.336 "nvme_ioq_poll_period_us": 0, 00:22:18.336 "io_queue_requests": 0, 00:22:18.336 "delay_cmd_submit": true, 00:22:18.336 "transport_retry_count": 4, 00:22:18.336 "bdev_retry_count": 3, 00:22:18.336 "transport_ack_timeout": 0, 00:22:18.336 "ctrlr_loss_timeout_sec": 0, 00:22:18.336 "reconnect_delay_sec": 0, 00:22:18.336 "fast_io_fail_timeout_sec": 0, 00:22:18.336 "disable_auto_failback": false, 00:22:18.336 "generate_uuids": false, 00:22:18.336 "transport_tos": 0, 00:22:18.336 "nvme_error_stat": false, 00:22:18.336 "rdma_srq_size": 0, 00:22:18.336 "io_path_stat": false, 00:22:18.336 "allow_accel_sequence": false, 00:22:18.336 "rdma_max_cq_size": 0, 00:22:18.336 "rdma_cm_event_timeout_ms": 0, 00:22:18.336 "dhchap_digests": [ 00:22:18.336 "sha256", 00:22:18.336 "sha384", 00:22:18.336 "sha512" 00:22:18.336 ], 00:22:18.336 "dhchap_dhgroups": [ 00:22:18.336 "null", 00:22:18.336 "ffdhe2048", 00:22:18.336 "ffdhe3072", 00:22:18.336 "ffdhe4096", 00:22:18.336 "ffdhe6144", 00:22:18.336 "ffdhe8192" 00:22:18.336 ] 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "bdev_nvme_set_hotplug", 00:22:18.336 "params": { 00:22:18.336 "period_us": 100000, 00:22:18.336 "enable": false 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "bdev_malloc_create", 00:22:18.336 "params": { 00:22:18.336 "name": "malloc0", 00:22:18.336 "num_blocks": 8192, 00:22:18.336 "block_size": 4096, 00:22:18.336 "physical_block_size": 4096, 00:22:18.336 "uuid": "fb6224cf-198e-4e37-a764-5728623b68e9", 00:22:18.336 "optimal_io_boundary": 0, 00:22:18.336 "md_size": 0, 00:22:18.336 "dif_type": 0, 00:22:18.336 "dif_is_head_of_md": false, 00:22:18.336 "dif_pi_format": 0 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "bdev_wait_for_examine" 00:22:18.336 } 00:22:18.336 ] 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "subsystem": "scsi", 00:22:18.336 "config": null 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "subsystem": "scheduler", 00:22:18.336 "config": [ 00:22:18.336 { 00:22:18.336 "method": "framework_set_scheduler", 00:22:18.336 "params": { 00:22:18.336 "name": "static" 00:22:18.336 } 00:22:18.336 } 00:22:18.336 ] 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "subsystem": "vhost_scsi", 00:22:18.336 "config": [] 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 
"subsystem": "vhost_blk", 00:22:18.336 "config": [] 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "subsystem": "ublk", 00:22:18.336 "config": [ 00:22:18.336 { 00:22:18.336 "method": "ublk_create_target", 00:22:18.336 "params": { 00:22:18.336 "cpumask": "1" 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "ublk_start_disk", 00:22:18.336 "params": { 00:22:18.336 "bdev_name": "malloc0", 00:22:18.336 "ublk_id": 0, 00:22:18.336 "num_queues": 1, 00:22:18.336 "queue_depth": 128 00:22:18.336 } 00:22:18.336 } 00:22:18.336 ] 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "subsystem": "nbd", 00:22:18.336 "config": [] 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "subsystem": "nvmf", 00:22:18.336 "config": [ 00:22:18.336 { 00:22:18.336 "method": "nvmf_set_config", 00:22:18.336 "params": { 00:22:18.336 "discovery_filter": "match_any", 00:22:18.336 "admin_cmd_passthru": { 00:22:18.336 "identify_ctrlr": false 00:22:18.336 }, 00:22:18.336 "dhchap_digests": [ 00:22:18.336 "sha256", 00:22:18.336 "sha384", 00:22:18.336 "sha512" 00:22:18.336 ], 00:22:18.336 "dhchap_dhgroups": [ 00:22:18.336 "null", 00:22:18.336 "ffdhe2048", 00:22:18.336 "ffdhe3072", 00:22:18.336 "ffdhe4096", 00:22:18.336 "ffdhe6144", 00:22:18.336 "ffdhe8192" 00:22:18.336 ] 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "nvmf_set_max_subsystems", 00:22:18.336 "params": { 00:22:18.336 "max_subsystems": 1024 00:22:18.336 } 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "method": "nvmf_set_crdt", 00:22:18.336 "params": { 00:22:18.336 "crdt1": 0, 00:22:18.336 "crdt2": 0, 00:22:18.336 "crdt3": 0 00:22:18.336 } 00:22:18.336 } 00:22:18.336 ] 00:22:18.336 }, 00:22:18.336 { 00:22:18.336 "subsystem": "iscsi", 00:22:18.336 "config": [ 00:22:18.336 { 00:22:18.336 "method": "iscsi_set_options", 00:22:18.336 "params": { 00:22:18.336 "node_base": "iqn.2016-06.io.spdk", 00:22:18.336 "max_sessions": 128, 00:22:18.336 "max_connections_per_session": 2, 00:22:18.336 "max_queue_depth": 64, 00:22:18.336 "default_time2wait": 2, 00:22:18.336 "default_time2retain": 20, 00:22:18.336 "first_burst_length": 8192, 00:22:18.336 "immediate_data": true, 00:22:18.336 "allow_duplicated_isid": false, 00:22:18.336 "error_recovery_level": 0, 00:22:18.336 "nop_timeout": 60, 00:22:18.336 "nop_in_interval": 30, 00:22:18.336 "disable_chap": false, 00:22:18.336 "require_chap": false, 00:22:18.336 "mutual_chap": false, 00:22:18.336 "chap_group": 0, 00:22:18.336 "max_large_datain_per_connection": 64, 00:22:18.336 "max_r2t_per_connection": 4, 00:22:18.336 "pdu_pool_size": 36864, 00:22:18.336 "immediate_data_pool_size": 16384, 00:22:18.336 "data_out_pool_size": 2048 00:22:18.336 } 00:22:18.336 } 00:22:18.336 ] 00:22:18.336 } 00:22:18.336 ] 00:22:18.336 }' 00:22:18.336 12:53:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.336 12:53:18 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.336 12:53:18 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.336 12:53:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:18.598 [2024-12-05 12:53:18.201867] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:22:18.598 [2024-12-05 12:53:18.202017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85173 ] 00:22:18.598 [2024-12-05 12:53:18.363572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.598 [2024-12-05 12:53:18.388717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.168 [2024-12-05 12:53:18.748826] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:19.168 [2024-12-05 12:53:18.749145] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:19.168 [2024-12-05 12:53:18.756949] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:19.168 [2024-12-05 12:53:18.757025] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:19.168 [2024-12-05 12:53:18.757033] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:19.168 [2024-12-05 12:53:18.757043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:19.168 [2024-12-05 12:53:18.765906] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:19.168 [2024-12-05 12:53:18.765925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:19.168 [2024-12-05 12:53:18.771071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:19.168 [2024-12-05 12:53:18.771193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:19.168 [2024-12-05 12:53:18.777912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 85173 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85173 ']' 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85173 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85173 00:22:19.427 killing process with pid 85173 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.427 
12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85173' 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85173 00:22:19.427 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85173 00:22:19.686 [2024-12-05 12:53:19.345218] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:19.686 [2024-12-05 12:53:19.382938] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:19.686 [2024-12-05 12:53:19.383115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:19.686 [2024-12-05 12:53:19.387947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:19.686 [2024-12-05 12:53:19.388014] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:19.686 [2024-12-05 12:53:19.388029] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:19.686 [2024-12-05 12:53:19.388058] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:19.686 [2024-12-05 12:53:19.388220] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:19.945 12:53:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:22:19.945 00:22:19.945 real 0m3.651s 00:22:19.945 user 0m2.575s 00:22:19.945 sys 0m1.754s 00:22:19.945 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.945 12:53:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:19.945 ************************************ 00:22:19.945 END TEST test_save_ublk_config 00:22:19.945 ************************************ 00:22:20.205 12:53:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=85223 00:22:20.205 12:53:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.205 12:53:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:20.205 12:53:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 85223 00:22:20.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.205 12:53:19 ublk -- common/autotest_common.sh@835 -- # '[' -z 85223 ']' 00:22:20.206 12:53:19 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.206 12:53:19 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.206 12:53:19 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.206 12:53:19 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.206 12:53:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:20.206 [2024-12-05 12:53:19.940669] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
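The trace below shows test_create_ublk building a device through three RPCs: ublk_create_target, bdev_malloc_create, and ublk_start_disk. A condensed sketch of that flow, assuming rpc.py talks to the default /var/tmp/spdk.sock and the kernel ublk driver is available:

# Create the ublk target inside the SPDK app (requires kernel ublk support).
scripts/rpc.py ublk_create_target

# Back it with a 128 MiB malloc bdev using 4096-byte blocks.
scripts/rpc.py bdev_malloc_create 128 4096 -b Malloc0

# Expose the bdev as /dev/ublkb0 with 4 queues of depth 512.
scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
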
00:22:20.206 [2024-12-05 12:53:19.940916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85223 ] 00:22:20.466 [2024-12-05 12:53:20.118412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:20.466 [2024-12-05 12:53:20.145366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.466 [2024-12-05 12:53:20.145534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.034 12:53:20 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.034 12:53:20 ublk -- common/autotest_common.sh@868 -- # return 0 00:22:21.034 12:53:20 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:22:21.034 12:53:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:21.034 12:53:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.034 12:53:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:21.034 ************************************ 00:22:21.034 START TEST test_create_ublk 00:22:21.034 ************************************ 00:22:21.034 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:22:21.035 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:21.035 [2024-12-05 12:53:20.780833] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:21.035 [2024-12-05 12:53:20.782421] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.035 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:22:21.035 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.035 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:22:21.035 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.035 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:21.035 [2024-12-05 12:53:20.868995] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:22:21.035 [2024-12-05 12:53:20.869406] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:21.035 [2024-12-05 12:53:20.869421] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:21.035 [2024-12-05 12:53:20.869439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:21.035 [2024-12-05 12:53:20.875007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:21.035 [2024-12-05 12:53:20.875050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:21.035 
[2024-12-05 12:53:20.877571] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:21.035 [2024-12-05 12:53:20.878307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:21.295 [2024-12-05 12:53:20.894834] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:21.295 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:22:21.295 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.295 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:21.295 12:53:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:22:21.295 { 00:22:21.295 "ublk_device": "/dev/ublkb0", 00:22:21.295 "id": 0, 00:22:21.295 "queue_depth": 512, 00:22:21.295 "num_queues": 4, 00:22:21.295 "bdev_name": "Malloc0" 00:22:21.295 } 00:22:21.295 ]' 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:22:21.295 12:53:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:22:21.295 12:53:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:22:21.295 12:53:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:22:21.295 12:53:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:22:21.295 12:53:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:22:21.295 12:53:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:21.295 12:53:21 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:22:21.295 12:53:21 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:22:21.296 12:53:21 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:22:21.296 12:53:21 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
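run_fio_test expands to one long fio command line; broken out for readability, the job it assembles above is:

# 134217728 bytes = 128 MiB; every block is written with pattern 0xcc and
# would be verified against that pattern on readback.
fio --name=fio_test --filename=/dev/ublkb0 \
    --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc \
    --verify_state_save=0

Because the job is time_based and the 10-second budget is spent entirely on writes, the verification read phase never starts, which is exactly the warning fio prints next.
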
00:22:21.296 12:53:21 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:22:21.557 fio: verification read phase will never start because write phase uses all of runtime 00:22:21.557 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:22:21.557 fio-3.35 00:22:21.557 Starting 1 process 00:22:31.574 00:22:31.575 fio_test: (groupid=0, jobs=1): err= 0: pid=85263: Thu Dec 5 12:53:31 2024 00:22:31.575 write: IOPS=15.5k, BW=60.5MiB/s (63.4MB/s)(605MiB/10001msec); 0 zone resets 00:22:31.575 clat (usec): min=40, max=4096, avg=63.58, stdev=102.70 00:22:31.575 lat (usec): min=40, max=4096, avg=64.14, stdev=102.85 00:22:31.575 clat percentiles (usec): 00:22:31.575 | 1.00th=[ 44], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 49], 00:22:31.575 | 30.00th=[ 51], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 56], 00:22:31.575 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 64], 95.00th=[ 70], 00:22:31.575 | 99.00th=[ 359], 99.50th=[ 404], 99.90th=[ 1811], 99.95th=[ 2737], 00:22:31.575 | 99.99th=[ 3523] 00:22:31.575 bw ( KiB/s): min=56000, max=68200, per=100.00%, avg=62088.63, stdev=3681.04, samples=19 00:22:31.575 iops : min=14000, max=17050, avg=15522.05, stdev=920.17, samples=19 00:22:31.575 lat (usec) : 50=26.15%, 100=71.67%, 250=0.15%, 500=1.84%, 750=0.02% 00:22:31.575 lat (usec) : 1000=0.01% 00:22:31.575 lat (msec) : 2=0.05%, 4=0.09%, 10=0.01% 00:22:31.575 cpu : usr=3.29%, sys=12.80%, ctx=154955, majf=0, minf=796 00:22:31.575 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.575 issued rwts: total=0,154920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.575 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:31.575 00:22:31.575 Run status group 0 (all jobs): 00:22:31.575 WRITE: bw=60.5MiB/s (63.4MB/s), 60.5MiB/s-60.5MiB/s (63.4MB/s-63.4MB/s), io=605MiB (635MB), run=10001-10001msec 00:22:31.575 00:22:31.575 Disk stats (read/write): 00:22:31.575 ublkb0: ios=0/153734, merge=0/0, ticks=0/7600, in_queue=7600, util=99.12% 00:22:31.575 12:53:31 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.575 [2024-12-05 12:53:31.349217] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:31.575 [2024-12-05 12:53:31.389408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:31.575 [2024-12-05 12:53:31.390541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:31.575 [2024-12-05 12:53:31.396856] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:31.575 [2024-12-05 12:53:31.397194] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:31.575 [2024-12-05 12:53:31.397209] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.575 12:53:31 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.575 [2024-12-05 12:53:31.412998] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:22:31.575 request: 00:22:31.575 { 00:22:31.575 "ublk_id": 0, 00:22:31.575 "method": "ublk_stop_disk", 00:22:31.575 "req_id": 1 00:22:31.575 } 00:22:31.575 Got JSON-RPC error response 00:22:31.575 response: 00:22:31.575 { 00:22:31.575 "code": -19, 00:22:31.575 "message": "No such device" 00:22:31.575 } 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:31.575 12:53:31 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.575 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.836 [2024-12-05 12:53:31.428959] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:31.836 [2024-12-05 12:53:31.430848] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:31.836 [2024-12-05 12:53:31.430900] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.836 12:53:31 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.836 12:53:31 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:22:31.836 12:53:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.836 12:53:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:31.836 12:53:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:22:31.836 12:53:31 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:31.836 12:53:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:31.836 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.837 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.837 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.837 12:53:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:31.837 12:53:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:22:31.837 12:53:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:31.837 00:22:31.837 real 0m10.851s 00:22:31.837 user 0m0.681s 00:22:31.837 sys 0m1.388s 00:22:31.837 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.837 12:53:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.837 ************************************ 00:22:31.837 END TEST test_create_ublk 00:22:31.837 ************************************ 00:22:31.837 12:53:31 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:22:31.837 12:53:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:31.837 12:53:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.837 12:53:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.097 ************************************ 00:22:32.097 START TEST test_create_multi_ublk 00:22:32.097 ************************************ 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.097 [2024-12-05 12:53:31.704845] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:32.097 [2024-12-05 12:53:31.706306] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.097 [2024-12-05 12:53:31.816012] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:22:32.097 [2024-12-05 12:53:31.816427] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:32.097 [2024-12-05 12:53:31.816438] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:32.097 [2024-12-05 12:53:31.816445] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:32.097 [2024-12-05 12:53:31.827922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:32.097 [2024-12-05 12:53:31.827953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:32.097 [2024-12-05 12:53:31.839853] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:32.097 [2024-12-05 12:53:31.840550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:32.097 [2024-12-05 12:53:31.874864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.097 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.358 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.359 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:32.359 12:53:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:32.359 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.359 12:53:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 [2024-12-05 12:53:31.978990] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:32.359 [2024-12-05 12:53:31.979392] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:32.359 [2024-12-05 12:53:31.979401] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:32.359 [2024-12-05 12:53:31.979410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:32.359 [2024-12-05 12:53:31.990873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:32.359 [2024-12-05 12:53:31.990911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:32.359 [2024-12-05 12:53:32.002849] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:32.359 [2024-12-05 12:53:32.003595] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:32.359 [2024-12-05 12:53:32.042863] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.359 
12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.359 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.359 [2024-12-05 12:53:32.151434] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:32.359 [2024-12-05 12:53:32.151844] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:32.359 [2024-12-05 12:53:32.151886] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:32.359 [2024-12-05 12:53:32.151894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:32.359 [2024-12-05 12:53:32.153378] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:32.359 [2024-12-05 12:53:32.153398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:32.359 [2024-12-05 12:53:32.161871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:32.359 [2024-12-05 12:53:32.162549] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:32.359 [2024-12-05 12:53:32.201859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.621 [2024-12-05 12:53:32.309984] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:32.621 [2024-12-05 12:53:32.310403] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:32.621 [2024-12-05 12:53:32.310416] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:32.621 [2024-12-05 12:53:32.310425] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:32.621 
[2024-12-05 12:53:32.321869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:32.621 [2024-12-05 12:53:32.321908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:32.621 [2024-12-05 12:53:32.333858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:32.621 [2024-12-05 12:53:32.334574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:32.621 [2024-12-05 12:53:32.345907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.621 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:32.621 { 00:22:32.621 "ublk_device": "/dev/ublkb0", 00:22:32.621 "id": 0, 00:22:32.621 "queue_depth": 512, 00:22:32.621 "num_queues": 4, 00:22:32.621 "bdev_name": "Malloc0" 00:22:32.621 }, 00:22:32.621 { 00:22:32.621 "ublk_device": "/dev/ublkb1", 00:22:32.621 "id": 1, 00:22:32.621 "queue_depth": 512, 00:22:32.621 "num_queues": 4, 00:22:32.621 "bdev_name": "Malloc1" 00:22:32.621 }, 00:22:32.621 { 00:22:32.621 "ublk_device": "/dev/ublkb2", 00:22:32.622 "id": 2, 00:22:32.622 "queue_depth": 512, 00:22:32.622 "num_queues": 4, 00:22:32.622 "bdev_name": "Malloc2" 00:22:32.622 }, 00:22:32.622 { 00:22:32.622 "ublk_device": "/dev/ublkb3", 00:22:32.622 "id": 3, 00:22:32.622 "queue_depth": 512, 00:22:32.622 "num_queues": 4, 00:22:32.622 "bdev_name": "Malloc3" 00:22:32.622 } 00:22:32.622 ]' 00:22:32.622 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:32.622 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.622 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:32.622 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:32.622 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:32.622 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:32.622 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:32.882 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:33.142 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:33.401 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:33.401 12:53:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:33.401 [2024-12-05 12:53:33.033986] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:33.401 [2024-12-05 12:53:33.074397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:33.401 [2024-12-05 12:53:33.075649] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:33.401 [2024-12-05 12:53:33.081850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:33.401 [2024-12-05 12:53:33.082161] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:33.401 [2024-12-05 12:53:33.082169] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:33.401 [2024-12-05 12:53:33.096993] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:33.401 [2024-12-05 12:53:33.130399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:33.401 [2024-12-05 12:53:33.131655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:33.401 [2024-12-05 12:53:33.145838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:33.401 [2024-12-05 12:53:33.146186] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:33.401 [2024-12-05 12:53:33.146197] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:33.401 [2024-12-05 12:53:33.157040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:33.401 [2024-12-05 12:53:33.194871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:33.401 [2024-12-05 12:53:33.195740] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:33.401 [2024-12-05 12:53:33.200821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:33.401 [2024-12-05 12:53:33.201163] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:33.401 [2024-12-05 12:53:33.201177] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.401 12:53:33 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.401 [2024-12-05 12:53:33.208936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:33.401 [2024-12-05 12:53:33.248908] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:33.401 [2024-12-05 12:53:33.249704] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:33.683 [2024-12-05 12:53:33.256855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:33.683 [2024-12-05 12:53:33.257183] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:33.683 [2024-12-05 12:53:33.257191] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:33.683 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.683 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:33.683 [2024-12-05 12:53:33.472997] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:33.683 [2024-12-05 12:53:33.474156] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:33.683 [2024-12-05 12:53:33.474202] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:33.683 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:33.683 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.683 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:33.683 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.683 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.944 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:34.202 12:53:33 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:34.202 ************************************ 00:22:34.202 END TEST test_create_multi_ublk 00:22:34.202 ************************************ 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:34.202 00:22:34.202 real 0m2.231s 00:22:34.202 user 0m0.807s 00:22:34.202 sys 0m0.175s 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.202 12:53:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:34.202 12:53:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:34.202 12:53:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:34.202 12:53:33 ublk -- ublk/ublk.sh@130 -- # killprocess 85223 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@954 -- # '[' -z 85223 ']' 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@958 -- # kill -0 85223 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@959 -- # uname 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85223 00:22:34.202 killing process with pid 85223 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85223' 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@973 -- # kill 85223 00:22:34.202 12:53:33 ublk -- common/autotest_common.sh@978 -- # wait 85223 00:22:34.463 [2024-12-05 12:53:34.269875] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:34.463 [2024-12-05 12:53:34.269956] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:34.724 00:22:34.725 real 0m18.549s 00:22:34.725 user 0m28.734s 00:22:34.725 sys 0m8.004s 00:22:34.725 12:53:34 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.725 12:53:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:34.725 ************************************ 00:22:34.725 END TEST ublk 00:22:34.725 ************************************ 00:22:34.725 12:53:34 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:34.725 
12:53:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:34.725 12:53:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.725 12:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:34.725 ************************************ 00:22:34.725 START TEST ublk_recovery 00:22:34.725 ************************************ 00:22:34.725 12:53:34 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:34.985 * Looking for test storage... 00:22:34.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.985 12:53:34 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:34.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.985 --rc genhtml_branch_coverage=1 00:22:34.985 --rc genhtml_function_coverage=1 00:22:34.985 --rc genhtml_legend=1 00:22:34.985 --rc geninfo_all_blocks=1 00:22:34.985 --rc geninfo_unexecuted_blocks=1 00:22:34.985 00:22:34.985 ' 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:34.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.985 --rc genhtml_branch_coverage=1 00:22:34.985 --rc genhtml_function_coverage=1 00:22:34.985 --rc genhtml_legend=1 00:22:34.985 --rc geninfo_all_blocks=1 00:22:34.985 --rc geninfo_unexecuted_blocks=1 00:22:34.985 00:22:34.985 ' 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:34.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.985 --rc genhtml_branch_coverage=1 00:22:34.985 --rc genhtml_function_coverage=1 00:22:34.985 --rc genhtml_legend=1 00:22:34.985 --rc geninfo_all_blocks=1 00:22:34.985 --rc geninfo_unexecuted_blocks=1 00:22:34.985 00:22:34.985 ' 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:34.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.985 --rc genhtml_branch_coverage=1 00:22:34.985 --rc genhtml_function_coverage=1 00:22:34.985 --rc genhtml_legend=1 00:22:34.985 --rc geninfo_all_blocks=1 00:22:34.985 --rc geninfo_unexecuted_blocks=1 00:22:34.985 00:22:34.985 ' 00:22:34.985 12:53:34 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:34.985 12:53:34 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:34.985 12:53:34 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:34.985 12:53:34 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85587 00:22:34.985 12:53:34 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.985 12:53:34 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85587 00:22:34.985 12:53:34 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:34.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85587 ']' 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.985 12:53:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.986 [2024-12-05 12:53:34.786548] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:22:34.986 [2024-12-05 12:53:34.786877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85587 ] 00:22:35.246 [2024-12-05 12:53:34.948383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:35.246 [2024-12-05 12:53:34.975406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.246 [2024-12-05 12:53:34.975545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.838 12:53:35 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.838 12:53:35 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:35.838 12:53:35 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:35.838 12:53:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.838 12:53:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.838 [2024-12-05 12:53:35.677830] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:35.838 [2024-12-05 12:53:35.679306] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:35.838 12:53:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.839 12:53:35 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:35.839 12:53:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.839 12:53:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.098 malloc0 00:22:36.098 12:53:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.098 12:53:35 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:36.098 12:53:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.098 12:53:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.098 [2024-12-05 12:53:35.725966] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:36.098 [2024-12-05 12:53:35.726084] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:36.098 [2024-12-05 12:53:35.726092] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:36.098 [2024-12-05 12:53:35.726101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:36.098 [2024-12-05 12:53:35.734929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:36.098 [2024-12-05 12:53:35.734962] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:36.098 [2024-12-05 12:53:35.741834] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:36.098 [2024-12-05 12:53:35.741989] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:36.098 [2024-12-05 12:53:35.764842] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:36.098 1 00:22:36.098 12:53:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.098 12:53:35 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:37.039 12:53:36 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85621 00:22:37.039 12:53:36 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:37.039 12:53:36 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:37.039 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:37.039 fio-3.35 00:22:37.039 Starting 1 process 00:22:42.330 12:53:41 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85587 00:22:42.330 12:53:41 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:47.618 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85587 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:47.618 12:53:46 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85733 00:22:47.618 12:53:46 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:47.618 12:53:46 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.618 12:53:46 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85733 00:22:47.618 12:53:46 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85733 ']' 00:22:47.618 12:53:46 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.618 12:53:46 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.618 12:53:46 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.618 12:53:46 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.618 12:53:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.618 [2024-12-05 12:53:46.871777] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:22:47.618 [2024-12-05 12:53:46.871934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85733 ] 00:22:47.618 [2024-12-05 12:53:47.031732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:47.618 [2024-12-05 12:53:47.057978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.618 [2024-12-05 12:53:47.058153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:48.188 12:53:47 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.188 [2024-12-05 12:53:47.816832] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:48.188 [2024-12-05 12:53:47.818337] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.188 12:53:47 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.188 malloc0 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.188 12:53:47 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.188 [2024-12-05 12:53:47.856986] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:48.188 [2024-12-05 12:53:47.857032] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:48.188 [2024-12-05 12:53:47.857041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:48.188 [2024-12-05 12:53:47.864892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:48.188 [2024-12-05 12:53:47.864932] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:48.188 1 00:22:48.188 12:53:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.188 12:53:47 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85621 00:22:49.248 [2024-12-05 12:53:48.864994] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:49.248 [2024-12-05 12:53:48.871832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:49.248 [2024-12-05 12:53:48.871864] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:50.191 [2024-12-05 12:53:49.871899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:50.191 [2024-12-05 12:53:49.875850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:50.191 [2024-12-05 12:53:49.875872] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:22:51.135 [2024-12-05 12:53:50.875920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:51.135 [2024-12-05 12:53:50.883845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:51.135 [2024-12-05 12:53:50.883890] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:51.135 [2024-12-05 12:53:50.883900] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:51.135 [2024-12-05 12:53:50.884012] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:23:13.079 [2024-12-05 12:54:12.067839] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:23:13.080 [2024-12-05 12:54:12.074662] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:23:13.080 [2024-12-05 12:54:12.082194] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:23:13.080 [2024-12-05 12:54:12.082237] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:39.650 00:23:39.650 fio_test: (groupid=0, jobs=1): err= 0: pid=85624: Thu Dec 5 12:54:37 2024 00:23:39.650 read: IOPS=14.0k, BW=54.8MiB/s (57.4MB/s)(3286MiB/60002msec) 00:23:39.650 slat (nsec): min=938, max=959436, avg=5116.34, stdev=3528.64 00:23:39.650 clat (usec): min=547, max=30309k, avg=4382.99, stdev=255928.86 00:23:39.650 lat (usec): min=552, max=30309k, avg=4388.11, stdev=255928.86 00:23:39.650 clat percentiles (usec): 00:23:39.650 | 1.00th=[ 1647], 5.00th=[ 1811], 10.00th=[ 1844], 20.00th=[ 1876], 00:23:39.650 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1958], 60.00th=[ 2040], 00:23:39.650 | 70.00th=[ 2343], 80.00th=[ 2442], 90.00th=[ 2606], 95.00th=[ 3261], 00:23:39.650 | 99.00th=[ 5342], 99.50th=[ 5997], 99.90th=[ 7767], 99.95th=[10945], 00:23:39.650 | 99.99th=[13829] 00:23:39.650 bw ( KiB/s): min=43272, max=128304, per=100.00%, avg=112229.20, stdev=18816.86, samples=59 00:23:39.650 iops : min=10818, max=32076, avg=28057.29, stdev=4704.21, samples=59 00:23:39.650 write: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(3282MiB/60002msec); 0 zone resets 00:23:39.650 slat (nsec): min=965, max=2690.2k, avg=5183.05, stdev=5868.32 00:23:39.650 clat (usec): min=528, max=30309k, avg=4741.56, stdev=272636.73 00:23:39.650 lat (usec): min=532, max=30309k, avg=4746.75, stdev=272636.73 00:23:39.650 clat percentiles (usec): 00:23:39.650 | 1.00th=[ 1680], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1975], 00:23:39.650 | 30.00th=[ 1991], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2114], 00:23:39.650 | 70.00th=[ 2409], 80.00th=[ 2540], 90.00th=[ 2671], 95.00th=[ 3195], 00:23:39.650 | 99.00th=[ 5407], 99.50th=[ 6063], 99.90th=[ 7898], 99.95th=[11076], 00:23:39.650 | 99.99th=[13960] 00:23:39.650 bw ( KiB/s): min=43768, max=128752, per=100.00%, avg=112079.10, stdev=18525.22, samples=59 00:23:39.650 iops : min=10942, max=32188, avg=28019.76, stdev=4631.30, samples=59 00:23:39.650 lat (usec) : 750=0.01%, 1000=0.01% 00:23:39.650 lat (msec) : 2=44.99%, 4=52.03%, 10=2.92%, 20=0.05%, >=2000=0.01% 00:23:39.650 cpu : usr=3.61%, sys=14.70%, ctx=57844, majf=0, minf=15 00:23:39.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:39.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:39.650 
issued rwts: total=841222,840112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:39.650 00:23:39.650 Run status group 0 (all jobs): 00:23:39.650 READ: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=3286MiB (3446MB), run=60002-60002msec 00:23:39.650 WRITE: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=3282MiB (3441MB), run=60002-60002msec 00:23:39.650 00:23:39.650 Disk stats (read/write): 00:23:39.650 ublkb1: ios=837707/836695, merge=0/0, ticks=3607933/3837793, in_queue=7445726, util=99.95% 00:23:39.650 12:54:37 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.650 [2024-12-05 12:54:37.045550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:39.650 [2024-12-05 12:54:37.091858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:39.650 [2024-12-05 12:54:37.092019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:39.650 [2024-12-05 12:54:37.099845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:39.650 [2024-12-05 12:54:37.099951] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:39.650 [2024-12-05 12:54:37.099964] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.650 12:54:37 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.650 [2024-12-05 12:54:37.115941] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:39.650 [2024-12-05 12:54:37.117361] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:39.650 [2024-12-05 12:54:37.117398] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.650 12:54:37 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:39.650 12:54:37 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:39.650 12:54:37 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85733 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85733 ']' 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85733 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85733 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85733' 00:23:39.650 killing process with pid 85733 00:23:39.650 12:54:37 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85733 00:23:39.651 12:54:37 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85733 
00:23:39.651 [2024-12-05 12:54:37.388384] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:39.651 [2024-12-05 12:54:37.388453] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:39.651 00:23:39.651 real 1m3.208s 00:23:39.651 user 1m45.612s 00:23:39.651 sys 0m21.554s 00:23:39.651 12:54:37 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.651 ************************************ 00:23:39.651 END TEST ublk_recovery 00:23:39.651 ************************************ 00:23:39.651 12:54:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.651 12:54:37 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:39.651 12:54:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:39.651 12:54:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.651 12:54:37 -- common/autotest_common.sh@10 -- # set +x 00:23:39.651 12:54:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:39.651 12:54:37 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:39.651 12:54:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.651 12:54:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.651 12:54:37 -- common/autotest_common.sh@10 -- # set +x 00:23:39.651 ************************************ 00:23:39.651 START TEST ftl 00:23:39.651 ************************************ 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:39.651 * Looking for test storage... 
00:23:39.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.651 12:54:37 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.651 12:54:37 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.651 12:54:37 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.651 12:54:37 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.651 12:54:37 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.651 12:54:37 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.651 12:54:37 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.651 12:54:37 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:39.651 12:54:37 ftl -- scripts/common.sh@345 -- # : 1 00:23:39.651 12:54:37 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.651 12:54:37 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.651 12:54:37 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:39.651 12:54:37 ftl -- scripts/common.sh@353 -- # local d=1 00:23:39.651 12:54:37 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.651 12:54:37 ftl -- scripts/common.sh@355 -- # echo 1 00:23:39.651 12:54:37 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.651 12:54:37 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@353 -- # local d=2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.651 12:54:37 ftl -- scripts/common.sh@355 -- # echo 2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.651 12:54:37 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.651 12:54:37 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.651 12:54:37 ftl -- scripts/common.sh@368 -- # return 0 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:39.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.651 --rc genhtml_branch_coverage=1 00:23:39.651 --rc genhtml_function_coverage=1 00:23:39.651 --rc genhtml_legend=1 00:23:39.651 --rc geninfo_all_blocks=1 00:23:39.651 --rc geninfo_unexecuted_blocks=1 00:23:39.651 00:23:39.651 ' 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:39.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.651 --rc genhtml_branch_coverage=1 00:23:39.651 --rc genhtml_function_coverage=1 00:23:39.651 --rc genhtml_legend=1 00:23:39.651 --rc geninfo_all_blocks=1 00:23:39.651 --rc geninfo_unexecuted_blocks=1 00:23:39.651 00:23:39.651 ' 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:39.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.651 --rc genhtml_branch_coverage=1 00:23:39.651 --rc genhtml_function_coverage=1 00:23:39.651 --rc 
genhtml_legend=1 00:23:39.651 --rc geninfo_all_blocks=1 00:23:39.651 --rc geninfo_unexecuted_blocks=1 00:23:39.651 00:23:39.651 ' 00:23:39.651 12:54:37 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:39.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.651 --rc genhtml_branch_coverage=1 00:23:39.651 --rc genhtml_function_coverage=1 00:23:39.651 --rc genhtml_legend=1 00:23:39.651 --rc geninfo_all_blocks=1 00:23:39.651 --rc geninfo_unexecuted_blocks=1 00:23:39.651 00:23:39.651 ' 00:23:39.651 12:54:37 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:39.651 12:54:37 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:39.651 12:54:37 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.651 12:54:37 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:39.651 12:54:37 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:39.651 12:54:37 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:39.651 12:54:37 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:39.651 12:54:37 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:39.651 12:54:37 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:39.651 12:54:37 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.651 12:54:37 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.651 12:54:37 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:39.651 12:54:37 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:39.651 12:54:37 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:39.651 12:54:37 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:39.651 12:54:37 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:39.651 12:54:37 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:39.652 12:54:37 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.652 12:54:37 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:39.652 12:54:37 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:39.652 12:54:37 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:39.652 12:54:37 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:39.652 12:54:37 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:39.652 12:54:37 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:39.652 12:54:37 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:39.652 12:54:37 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:39.652 12:54:37 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:39.652 12:54:37 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.652 12:54:37 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:39.652 12:54:37 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:39.652 12:54:37 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:39.652 12:54:37 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:23:39.652 12:54:37 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:39.652 12:54:37 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:39.652 12:54:37 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:39.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:39.652 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.652 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.652 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.652 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:39.652 12:54:38 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=86524 00:23:39.652 12:54:38 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:39.652 12:54:38 ftl -- ftl/ftl.sh@38 -- # waitforlisten 86524 00:23:39.652 12:54:38 ftl -- common/autotest_common.sh@835 -- # '[' -z 86524 ']' 00:23:39.652 12:54:38 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.652 12:54:38 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.652 12:54:38 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.652 12:54:38 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.652 12:54:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:39.652 [2024-12-05 12:54:38.470005] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:23:39.652 [2024-12-05 12:54:38.470137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86524 ] 00:23:39.652 [2024-12-05 12:54:38.620069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.652 [2024-12-05 12:54:38.643525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.652 12:54:39 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.652 12:54:39 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:39.652 12:54:39 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:39.911 12:54:39 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:40.168 12:54:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:40.168 12:54:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:40.733 12:54:40 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:40.733 12:54:40 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:40.733 12:54:40 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@50 -- # break 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:40.990 12:54:40 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@63 -- # break 00:23:40.990 12:54:40 ftl -- ftl/ftl.sh@66 -- # killprocess 86524 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 86524 ']' 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@958 -- # kill -0 86524 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@959 -- # uname 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86524 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.990 killing process with pid 86524 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86524' 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@973 -- # kill 86524 00:23:40.990 12:54:40 ftl -- common/autotest_common.sh@978 -- # wait 86524 00:23:41.354 12:54:41 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:41.354 12:54:41 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:41.354 12:54:41 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:41.354 12:54:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.354 12:54:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:41.354 ************************************ 00:23:41.354 START TEST ftl_fio_basic 00:23:41.354 ************************************ 00:23:41.354 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:41.354 * Looking for test storage... 
00:23:41.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:41.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.613 --rc genhtml_branch_coverage=1 00:23:41.613 --rc genhtml_function_coverage=1 00:23:41.613 --rc genhtml_legend=1 00:23:41.613 --rc geninfo_all_blocks=1 00:23:41.613 --rc geninfo_unexecuted_blocks=1 00:23:41.613 00:23:41.613 ' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:41.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.613 --rc 
genhtml_branch_coverage=1 00:23:41.613 --rc genhtml_function_coverage=1 00:23:41.613 --rc genhtml_legend=1 00:23:41.613 --rc geninfo_all_blocks=1 00:23:41.613 --rc geninfo_unexecuted_blocks=1 00:23:41.613 00:23:41.613 ' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:41.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.613 --rc genhtml_branch_coverage=1 00:23:41.613 --rc genhtml_function_coverage=1 00:23:41.613 --rc genhtml_legend=1 00:23:41.613 --rc geninfo_all_blocks=1 00:23:41.613 --rc geninfo_unexecuted_blocks=1 00:23:41.613 00:23:41.613 ' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:41.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.613 --rc genhtml_branch_coverage=1 00:23:41.613 --rc genhtml_function_coverage=1 00:23:41.613 --rc genhtml_legend=1 00:23:41.613 --rc geninfo_all_blocks=1 00:23:41.613 --rc geninfo_unexecuted_blocks=1 00:23:41.613 00:23:41.613 ' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:41.613 
12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:41.613 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:41.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86645 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86645 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86645 ']' 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:41.614 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.614 12:54:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:41.614 [2024-12-05 12:54:41.364955] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:23:41.614 [2024-12-05 12:54:41.365088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86645 ] 00:23:41.871 [2024-12-05 12:54:41.515830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:41.872 [2024-12-05 12:54:41.541767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.872 [2024-12-05 12:54:41.542026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.872 [2024-12-05 12:54:41.542113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:42.436 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:42.694 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:42.952 { 00:23:42.952 "name": "nvme0n1", 00:23:42.952 "aliases": [ 00:23:42.952 "849b0c94-e019-4686-9d04-27e5cfe873f4" 00:23:42.952 ], 00:23:42.952 "product_name": "NVMe disk", 00:23:42.952 "block_size": 4096, 00:23:42.952 "num_blocks": 1310720, 00:23:42.952 "uuid": "849b0c94-e019-4686-9d04-27e5cfe873f4", 00:23:42.952 "numa_id": -1, 00:23:42.952 "assigned_rate_limits": { 00:23:42.952 "rw_ios_per_sec": 0, 00:23:42.952 "rw_mbytes_per_sec": 0, 00:23:42.952 "r_mbytes_per_sec": 0, 00:23:42.952 "w_mbytes_per_sec": 0 00:23:42.952 }, 00:23:42.952 "claimed": false, 00:23:42.952 "zoned": false, 00:23:42.952 "supported_io_types": { 00:23:42.952 "read": true, 00:23:42.952 "write": true, 00:23:42.952 "unmap": true, 00:23:42.952 "flush": true, 00:23:42.952 "reset": true, 00:23:42.952 "nvme_admin": true, 00:23:42.952 "nvme_io": true, 00:23:42.952 "nvme_io_md": 
false, 00:23:42.952 "write_zeroes": true, 00:23:42.952 "zcopy": false, 00:23:42.952 "get_zone_info": false, 00:23:42.952 "zone_management": false, 00:23:42.952 "zone_append": false, 00:23:42.952 "compare": true, 00:23:42.952 "compare_and_write": false, 00:23:42.952 "abort": true, 00:23:42.952 "seek_hole": false, 00:23:42.952 "seek_data": false, 00:23:42.952 "copy": true, 00:23:42.952 "nvme_iov_md": false 00:23:42.952 }, 00:23:42.952 "driver_specific": { 00:23:42.952 "nvme": [ 00:23:42.952 { 00:23:42.952 "pci_address": "0000:00:11.0", 00:23:42.952 "trid": { 00:23:42.952 "trtype": "PCIe", 00:23:42.952 "traddr": "0000:00:11.0" 00:23:42.952 }, 00:23:42.952 "ctrlr_data": { 00:23:42.952 "cntlid": 0, 00:23:42.952 "vendor_id": "0x1b36", 00:23:42.952 "model_number": "QEMU NVMe Ctrl", 00:23:42.952 "serial_number": "12341", 00:23:42.952 "firmware_revision": "8.0.0", 00:23:42.952 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:42.952 "oacs": { 00:23:42.952 "security": 0, 00:23:42.952 "format": 1, 00:23:42.952 "firmware": 0, 00:23:42.952 "ns_manage": 1 00:23:42.952 }, 00:23:42.952 "multi_ctrlr": false, 00:23:42.952 "ana_reporting": false 00:23:42.952 }, 00:23:42.952 "vs": { 00:23:42.952 "nvme_version": "1.4" 00:23:42.952 }, 00:23:42.952 "ns_data": { 00:23:42.952 "id": 1, 00:23:42.952 "can_share": false 00:23:42.952 } 00:23:42.952 } 00:23:42.952 ], 00:23:42.952 "mp_policy": "active_passive" 00:23:42.952 } 00:23:42.952 } 00:23:42.952 ]' 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:42.952 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:43.211 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:43.211 12:54:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:43.469 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=3b75b8c6-c660-41bf-a0ca-4a6ce443403c 00:23:43.469 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3b75b8c6-c660-41bf-a0ca-4a6ce443403c 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:43.727 12:54:43 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:43.727 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:43.985 { 00:23:43.985 "name": "4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca", 00:23:43.985 "aliases": [ 00:23:43.985 "lvs/nvme0n1p0" 00:23:43.985 ], 00:23:43.985 "product_name": "Logical Volume", 00:23:43.985 "block_size": 4096, 00:23:43.985 "num_blocks": 26476544, 00:23:43.985 "uuid": "4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca", 00:23:43.985 "assigned_rate_limits": { 00:23:43.985 "rw_ios_per_sec": 0, 00:23:43.985 "rw_mbytes_per_sec": 0, 00:23:43.985 "r_mbytes_per_sec": 0, 00:23:43.985 "w_mbytes_per_sec": 0 00:23:43.985 }, 00:23:43.985 "claimed": false, 00:23:43.985 "zoned": false, 00:23:43.985 "supported_io_types": { 00:23:43.985 "read": true, 00:23:43.985 "write": true, 00:23:43.985 "unmap": true, 00:23:43.985 "flush": false, 00:23:43.985 "reset": true, 00:23:43.985 "nvme_admin": false, 00:23:43.985 "nvme_io": false, 00:23:43.985 "nvme_io_md": false, 00:23:43.985 "write_zeroes": true, 00:23:43.985 "zcopy": false, 00:23:43.985 "get_zone_info": false, 00:23:43.985 "zone_management": false, 00:23:43.985 "zone_append": false, 00:23:43.985 "compare": false, 00:23:43.985 "compare_and_write": false, 00:23:43.985 "abort": false, 00:23:43.985 "seek_hole": true, 00:23:43.985 "seek_data": true, 00:23:43.985 "copy": false, 00:23:43.985 "nvme_iov_md": false 00:23:43.985 }, 00:23:43.985 "driver_specific": { 00:23:43.985 "lvol": { 00:23:43.985 "lvol_store_uuid": "3b75b8c6-c660-41bf-a0ca-4a6ce443403c", 00:23:43.985 "base_bdev": "nvme0n1", 00:23:43.985 "thin_provision": true, 00:23:43.985 "num_allocated_clusters": 0, 00:23:43.985 "snapshot": false, 00:23:43.985 "clone": false, 00:23:43.985 "esnap_clone": false 00:23:43.985 } 00:23:43.985 } 00:23:43.985 } 00:23:43.985 ]' 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:43.985 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:44.243 12:54:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:44.501 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:44.501 { 00:23:44.501 "name": "4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca", 00:23:44.501 "aliases": [ 00:23:44.501 "lvs/nvme0n1p0" 00:23:44.501 ], 00:23:44.501 "product_name": "Logical Volume", 00:23:44.501 "block_size": 4096, 00:23:44.501 "num_blocks": 26476544, 00:23:44.501 "uuid": "4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca", 00:23:44.501 "assigned_rate_limits": { 00:23:44.501 "rw_ios_per_sec": 0, 00:23:44.501 "rw_mbytes_per_sec": 0, 00:23:44.501 "r_mbytes_per_sec": 0, 00:23:44.501 "w_mbytes_per_sec": 0 00:23:44.501 }, 00:23:44.501 "claimed": false, 00:23:44.501 "zoned": false, 00:23:44.501 "supported_io_types": { 00:23:44.501 "read": true, 00:23:44.501 "write": true, 00:23:44.501 "unmap": true, 00:23:44.501 "flush": false, 00:23:44.501 "reset": true, 00:23:44.501 "nvme_admin": false, 00:23:44.501 "nvme_io": false, 00:23:44.501 "nvme_io_md": false, 00:23:44.501 "write_zeroes": true, 00:23:44.501 "zcopy": false, 00:23:44.501 "get_zone_info": false, 00:23:44.501 "zone_management": false, 00:23:44.501 "zone_append": false, 00:23:44.501 "compare": false, 00:23:44.501 "compare_and_write": false, 00:23:44.501 "abort": false, 00:23:44.501 "seek_hole": true, 00:23:44.501 "seek_data": true, 00:23:44.501 "copy": false, 00:23:44.501 "nvme_iov_md": false 00:23:44.501 }, 00:23:44.501 "driver_specific": { 00:23:44.501 "lvol": { 00:23:44.501 "lvol_store_uuid": "3b75b8c6-c660-41bf-a0ca-4a6ce443403c", 00:23:44.501 "base_bdev": "nvme0n1", 00:23:44.501 "thin_provision": true, 00:23:44.501 "num_allocated_clusters": 0, 00:23:44.501 "snapshot": false, 00:23:44.501 "clone": false, 00:23:44.501 "esnap_clone": false 00:23:44.501 } 00:23:44.501 } 00:23:44.501 } 00:23:44.502 ]' 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:44.502 12:54:44 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:44.761 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca 00:23:44.761 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:44.761 { 00:23:44.761 "name": "4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca", 00:23:44.761 "aliases": [ 00:23:44.761 "lvs/nvme0n1p0" 00:23:44.761 ], 00:23:44.761 "product_name": "Logical Volume", 00:23:44.761 "block_size": 4096, 00:23:44.761 "num_blocks": 26476544, 00:23:44.761 "uuid": "4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca", 00:23:44.761 "assigned_rate_limits": { 00:23:44.761 "rw_ios_per_sec": 0, 00:23:44.761 "rw_mbytes_per_sec": 0, 00:23:44.761 "r_mbytes_per_sec": 0, 00:23:44.761 "w_mbytes_per_sec": 0 00:23:44.761 }, 00:23:44.761 "claimed": false, 00:23:44.761 "zoned": false, 00:23:44.761 "supported_io_types": { 00:23:44.761 "read": true, 00:23:44.761 "write": true, 00:23:44.761 "unmap": true, 00:23:44.761 "flush": false, 00:23:44.761 "reset": true, 00:23:44.761 "nvme_admin": false, 00:23:44.761 "nvme_io": false, 00:23:44.761 "nvme_io_md": false, 00:23:44.761 "write_zeroes": true, 00:23:44.761 "zcopy": false, 00:23:44.761 "get_zone_info": false, 00:23:44.761 "zone_management": false, 00:23:44.761 "zone_append": false, 00:23:44.761 "compare": false, 00:23:44.761 "compare_and_write": false, 00:23:44.761 "abort": false, 00:23:44.762 "seek_hole": true, 00:23:44.762 "seek_data": true, 00:23:44.762 "copy": false, 00:23:44.762 "nvme_iov_md": false 00:23:44.762 }, 00:23:44.762 "driver_specific": { 00:23:44.762 "lvol": { 00:23:44.762 "lvol_store_uuid": "3b75b8c6-c660-41bf-a0ca-4a6ce443403c", 00:23:44.762 "base_bdev": "nvme0n1", 00:23:44.762 "thin_provision": true, 00:23:44.762 "num_allocated_clusters": 0, 00:23:44.762 "snapshot": false, 00:23:44.762 "clone": false, 00:23:44.762 "esnap_clone": false 00:23:44.762 } 00:23:44.762 } 00:23:44.762 } 00:23:44.762 ]' 00:23:44.762 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:44.762 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:44.762 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:45.021 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:45.021 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:45.021 12:54:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:45.021 12:54:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:45.021 12:54:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:45.021 12:54:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca -c nvc0n1p0 --l2p_dram_limit 60 00:23:45.021 [2024-12-05 12:54:44.817352] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.817423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:45.021 [2024-12-05 12:54:44.817440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:45.021 [2024-12-05 12:54:44.817452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.817528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.817540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:45.021 [2024-12-05 12:54:44.817549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:45.021 [2024-12-05 12:54:44.817561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.817609] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:45.021 [2024-12-05 12:54:44.817968] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:45.021 [2024-12-05 12:54:44.817990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.818000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:45.021 [2024-12-05 12:54:44.818018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:23:45.021 [2024-12-05 12:54:44.818028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.818163] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1815c860-ac4e-44ec-8b66-5fcd92f86503 00:23:45.021 [2024-12-05 12:54:44.819587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.819629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:45.021 [2024-12-05 12:54:44.819642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:45.021 [2024-12-05 12:54:44.819649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.826721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.826754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:45.021 [2024-12-05 12:54:44.826767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.011 ms 00:23:45.021 [2024-12-05 12:54:44.826778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.826890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.826900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:45.021 [2024-12-05 12:54:44.826913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:23:45.021 [2024-12-05 12:54:44.826920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.826988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.826998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:45.021 [2024-12-05 12:54:44.827008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:45.021 [2024-12-05 12:54:44.827016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:45.021 [2024-12-05 12:54:44.827052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:45.021 [2024-12-05 12:54:44.828838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.828872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:45.021 [2024-12-05 12:54:44.828881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:23:45.021 [2024-12-05 12:54:44.828891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.828935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.828946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:45.021 [2024-12-05 12:54:44.828954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:45.021 [2024-12-05 12:54:44.828976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.021 [2024-12-05 12:54:44.829004] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:45.021 [2024-12-05 12:54:44.829179] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:45.021 [2024-12-05 12:54:44.829203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:45.021 [2024-12-05 12:54:44.829217] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:45.021 [2024-12-05 12:54:44.829227] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:45.021 [2024-12-05 12:54:44.829240] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:45.021 [2024-12-05 12:54:44.829248] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:45.021 [2024-12-05 12:54:44.829257] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:45.021 [2024-12-05 12:54:44.829264] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:45.021 [2024-12-05 12:54:44.829273] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:45.021 [2024-12-05 12:54:44.829290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.021 [2024-12-05 12:54:44.829300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:45.021 [2024-12-05 12:54:44.829307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:23:45.022 [2024-12-05 12:54:44.829317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.022 [2024-12-05 12:54:44.829405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.022 [2024-12-05 12:54:44.829420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:45.022 [2024-12-05 12:54:44.829427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:45.022 [2024-12-05 12:54:44.829437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.022 [2024-12-05 12:54:44.829542] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:45.022 [2024-12-05 12:54:44.829563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:45.022 
[2024-12-05 12:54:44.829573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.022 [2024-12-05 12:54:44.829584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:45.022 [2024-12-05 12:54:44.829601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:45.022 [2024-12-05 12:54:44.829619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:45.022 [2024-12-05 12:54:44.829627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.022 [2024-12-05 12:54:44.829645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:45.022 [2024-12-05 12:54:44.829654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:45.022 [2024-12-05 12:54:44.829661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:45.022 [2024-12-05 12:54:44.829673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:45.022 [2024-12-05 12:54:44.829693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:45.022 [2024-12-05 12:54:44.829702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:45.022 [2024-12-05 12:54:44.829718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:45.022 [2024-12-05 12:54:44.829725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:45.022 [2024-12-05 12:54:44.829742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.022 [2024-12-05 12:54:44.829764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:45.022 [2024-12-05 12:54:44.829774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.022 [2024-12-05 12:54:44.829792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:45.022 [2024-12-05 12:54:44.829800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.022 [2024-12-05 12:54:44.829842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:45.022 [2024-12-05 12:54:44.829854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:45.022 [2024-12-05 12:54:44.829871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:45.022 [2024-12-05 12:54:44.829879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:23:45.022 [2024-12-05 12:54:44.829897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:45.022 [2024-12-05 12:54:44.829906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:45.022 [2024-12-05 12:54:44.829915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:45.022 [2024-12-05 12:54:44.829924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:45.022 [2024-12-05 12:54:44.829931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:45.022 [2024-12-05 12:54:44.829939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:45.022 [2024-12-05 12:54:44.829955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:45.022 [2024-12-05 12:54:44.829962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.022 [2024-12-05 12:54:44.829970] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:45.022 [2024-12-05 12:54:44.829977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:45.022 [2024-12-05 12:54:44.829998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:45.022 [2024-12-05 12:54:44.830007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:45.022 [2024-12-05 12:54:44.830017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:45.022 [2024-12-05 12:54:44.830024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:45.022 [2024-12-05 12:54:44.830033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:45.022 [2024-12-05 12:54:44.830040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:45.022 [2024-12-05 12:54:44.830048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:45.022 [2024-12-05 12:54:44.830055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:45.022 [2024-12-05 12:54:44.830065] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:45.022 [2024-12-05 12:54:44.830077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.022 [2024-12-05 12:54:44.830088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:45.022 [2024-12-05 12:54:44.830096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:45.022 [2024-12-05 12:54:44.830105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:45.022 [2024-12-05 12:54:44.830112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:45.022 [2024-12-05 12:54:44.830121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:45.022 [2024-12-05 12:54:44.830128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:45.022 [2024-12-05 
12:54:44.830138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:45.022 [2024-12-05 12:54:44.830145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:45.022 [2024-12-05 12:54:44.830154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:45.022 [2024-12-05 12:54:44.830161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:45.022 [2024-12-05 12:54:44.830170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:45.022 [2024-12-05 12:54:44.830177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:45.022 [2024-12-05 12:54:44.830186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:45.022 [2024-12-05 12:54:44.830193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:45.022 [2024-12-05 12:54:44.830202] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:45.022 [2024-12-05 12:54:44.830210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:45.022 [2024-12-05 12:54:44.830220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:45.022 [2024-12-05 12:54:44.830227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:45.022 [2024-12-05 12:54:44.830236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:45.022 [2024-12-05 12:54:44.830244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:45.022 [2024-12-05 12:54:44.830253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.022 [2024-12-05 12:54:44.830261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:45.022 [2024-12-05 12:54:44.830273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:23:45.022 [2024-12-05 12:54:44.830280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.022 [2024-12-05 12:54:44.830336] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:23:45.022 [2024-12-05 12:54:44.830349] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:48.298 [2024-12-05 12:54:47.879685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.879782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:48.298 [2024-12-05 12:54:47.879824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3049.321 ms 00:23:48.298 [2024-12-05 12:54:47.879841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.891792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.891866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:48.298 [2024-12-05 12:54:47.891886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.839 ms 00:23:48.298 [2024-12-05 12:54:47.891895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.892038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.892049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:48.298 [2024-12-05 12:54:47.892060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:48.298 [2024-12-05 12:54:47.892067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.910658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.910731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.298 [2024-12-05 12:54:47.910753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.530 ms 00:23:48.298 [2024-12-05 12:54:47.910779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.910868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.910883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.298 [2024-12-05 12:54:47.910897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:48.298 [2024-12-05 12:54:47.910908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.911417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.911457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.298 [2024-12-05 12:54:47.911478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:23:48.298 [2024-12-05 12:54:47.911490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.911679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.911720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.298 [2024-12-05 12:54:47.911748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:23:48.298 [2024-12-05 12:54:47.911759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.919574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.919617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.298 [2024-12-05 
12:54:47.919633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.771 ms 00:23:48.298 [2024-12-05 12:54:47.919659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.928830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:48.298 [2024-12-05 12:54:47.946279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.946337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:48.298 [2024-12-05 12:54:47.946363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.520 ms 00:23:48.298 [2024-12-05 12:54:47.946373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.987259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.987338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:48.298 [2024-12-05 12:54:47.987353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.836 ms 00:23:48.298 [2024-12-05 12:54:47.987367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.987565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.987587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:48.298 [2024-12-05 12:54:47.987597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:23:48.298 [2024-12-05 12:54:47.987606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.990745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.990793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:48.298 [2024-12-05 12:54:47.990804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.078 ms 00:23:48.298 [2024-12-05 12:54:47.990826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.993272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.993309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:48.298 [2024-12-05 12:54:47.993320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:23:48.298 [2024-12-05 12:54:47.993331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:47.993636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.298 [2024-12-05 12:54:47.993657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:48.298 [2024-12-05 12:54:47.993676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:23:48.298 [2024-12-05 12:54:47.993688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.298 [2024-12-05 12:54:48.018906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.299 [2024-12-05 12:54:48.018971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:48.299 [2024-12-05 12:54:48.018984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.177 ms 00:23:48.299 [2024-12-05 12:54:48.018994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.299 [2024-12-05 12:54:48.023222] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.299 [2024-12-05 12:54:48.023267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:48.299 [2024-12-05 12:54:48.023279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:23:48.299 [2024-12-05 12:54:48.023289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.299 [2024-12-05 12:54:48.026229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.299 [2024-12-05 12:54:48.026266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:48.299 [2024-12-05 12:54:48.026277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:23:48.299 [2024-12-05 12:54:48.026287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.299 [2024-12-05 12:54:48.029335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.299 [2024-12-05 12:54:48.029472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:48.299 [2024-12-05 12:54:48.029488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:23:48.299 [2024-12-05 12:54:48.029500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.299 [2024-12-05 12:54:48.029545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.299 [2024-12-05 12:54:48.029558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:48.299 [2024-12-05 12:54:48.029566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:48.299 [2024-12-05 12:54:48.029576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.299 [2024-12-05 12:54:48.029665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.299 [2024-12-05 12:54:48.029681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:48.299 [2024-12-05 12:54:48.029689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:48.299 [2024-12-05 12:54:48.029699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.299 [2024-12-05 12:54:48.030743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3212.952 ms, result 0 00:23:48.299 { 00:23:48.299 "name": "ftl0", 00:23:48.299 "uuid": "1815c860-ac4e-44ec-8b66-5fcd92f86503" 00:23:48.299 } 00:23:48.299 12:54:48 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:48.299 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:48.299 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:48.299 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:48.299 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:48.299 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:48.299 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:48.556 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:48.814 [ 00:23:48.814 { 00:23:48.814 "name": "ftl0", 00:23:48.814 "aliases": [ 00:23:48.814 "1815c860-ac4e-44ec-8b66-5fcd92f86503" 00:23:48.814 ], 00:23:48.814 "product_name": "FTL disk", 00:23:48.814 
"block_size": 4096, 00:23:48.814 "num_blocks": 20971520, 00:23:48.814 "uuid": "1815c860-ac4e-44ec-8b66-5fcd92f86503", 00:23:48.814 "assigned_rate_limits": { 00:23:48.814 "rw_ios_per_sec": 0, 00:23:48.814 "rw_mbytes_per_sec": 0, 00:23:48.814 "r_mbytes_per_sec": 0, 00:23:48.814 "w_mbytes_per_sec": 0 00:23:48.814 }, 00:23:48.814 "claimed": false, 00:23:48.814 "zoned": false, 00:23:48.814 "supported_io_types": { 00:23:48.814 "read": true, 00:23:48.814 "write": true, 00:23:48.814 "unmap": true, 00:23:48.814 "flush": true, 00:23:48.814 "reset": false, 00:23:48.814 "nvme_admin": false, 00:23:48.814 "nvme_io": false, 00:23:48.814 "nvme_io_md": false, 00:23:48.814 "write_zeroes": true, 00:23:48.814 "zcopy": false, 00:23:48.814 "get_zone_info": false, 00:23:48.814 "zone_management": false, 00:23:48.814 "zone_append": false, 00:23:48.814 "compare": false, 00:23:48.814 "compare_and_write": false, 00:23:48.814 "abort": false, 00:23:48.814 "seek_hole": false, 00:23:48.814 "seek_data": false, 00:23:48.814 "copy": false, 00:23:48.814 "nvme_iov_md": false 00:23:48.814 }, 00:23:48.814 "driver_specific": { 00:23:48.814 "ftl": { 00:23:48.814 "base_bdev": "4dd915a2-30a4-4c38-9cf0-a3e1e3d3f1ca", 00:23:48.814 "cache": "nvc0n1p0" 00:23:48.814 } 00:23:48.814 } 00:23:48.814 } 00:23:48.814 ] 00:23:48.814 12:54:48 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:48.814 12:54:48 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:48.814 12:54:48 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:49.079 12:54:48 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:49.079 12:54:48 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:49.337 [2024-12-05 12:54:48.967595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.967835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:49.337 [2024-12-05 12:54:48.967882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:49.337 [2024-12-05 12:54:48.967892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.967935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:49.337 [2024-12-05 12:54:48.968522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.968561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:49.337 [2024-12-05 12:54:48.968574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:23:49.337 [2024-12-05 12:54:48.968606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.969014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.969042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:49.337 [2024-12-05 12:54:48.969053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:23:49.337 [2024-12-05 12:54:48.969064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.972292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.972315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:49.337 [2024-12-05 
12:54:48.972326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:23:49.337 [2024-12-05 12:54:48.972340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.978596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.978645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:49.337 [2024-12-05 12:54:48.978655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.233 ms 00:23:49.337 [2024-12-05 12:54:48.978665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.980344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.980388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:49.337 [2024-12-05 12:54:48.980398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:23:49.337 [2024-12-05 12:54:48.980407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.984633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.984672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:49.337 [2024-12-05 12:54:48.984685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.191 ms 00:23:49.337 [2024-12-05 12:54:48.984695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.984857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.984871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:49.337 [2024-12-05 12:54:48.984880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:23:49.337 [2024-12-05 12:54:48.984889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.986287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.986418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:49.337 [2024-12-05 12:54:48.986433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:23:49.337 [2024-12-05 12:54:48.986442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.987505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.987539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:49.337 [2024-12-05 12:54:48.987547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:23:49.337 [2024-12-05 12:54:48.987556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.988400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.337 [2024-12-05 12:54:48.988434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:49.337 [2024-12-05 12:54:48.988443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:23:49.337 [2024-12-05 12:54:48.988451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.337 [2024-12-05 12:54:48.989376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.338 [2024-12-05 12:54:48.989410] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:49.338 [2024-12-05 12:54:48.989419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:23:49.338 [2024-12-05 12:54:48.989428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.338 [2024-12-05 12:54:48.989459] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:49.338 [2024-12-05 12:54:48.989476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 
12:54:48.989670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.989994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:23:49.338 [2024-12-05 12:54:48.990315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:49.338 [2024-12-05 12:54:48.990754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:49.339 [2024-12-05 12:54:48.990906] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:49.339 [2024-12-05 12:54:48.990926] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1815c860-ac4e-44ec-8b66-5fcd92f86503 00:23:49.339 [2024-12-05 12:54:48.990936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:49.339 [2024-12-05 12:54:48.990943] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:49.339 [2024-12-05 12:54:48.990954] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:49.339 [2024-12-05 12:54:48.990961] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:49.339 [2024-12-05 12:54:48.990970] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:49.339 [2024-12-05 12:54:48.990977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:49.339 [2024-12-05 12:54:48.990986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:49.339 [2024-12-05 12:54:48.990993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:49.339 [2024-12-05 12:54:48.991001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:49.339 [2024-12-05 12:54:48.991009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.339 [2024-12-05 12:54:48.991019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:49.339 [2024-12-05 12:54:48.991027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:23:49.339 [2024-12-05 12:54:48.991036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:48.993027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.339 [2024-12-05 12:54:48.993073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:49.339 [2024-12-05 12:54:48.993084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.964 ms 00:23:49.339 [2024-12-05 12:54:48.993095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:48.993219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.339 [2024-12-05 12:54:48.993232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:49.339 [2024-12-05 12:54:48.993242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:49.339 [2024-12-05 12:54:48.993254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:48.999802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:48.999852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:49.339 [2024-12-05 12:54:48.999862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:48.999872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 
[2024-12-05 12:54:48.999933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:48.999944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:49.339 [2024-12-05 12:54:48.999952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:48.999964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.000043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.000068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:49.339 [2024-12-05 12:54:49.000076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.000085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.000107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.000116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:49.339 [2024-12-05 12:54:49.000124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.000133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.012627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.012684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.339 [2024-12-05 12:54:49.012695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.012717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.022691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.022751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.339 [2024-12-05 12:54:49.022763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.022776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.022877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.022892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:49.339 [2024-12-05 12:54:49.022901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.022910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.022967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.022978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:49.339 [2024-12-05 12:54:49.022987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.022997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.023096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.023108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:49.339 [2024-12-05 12:54:49.023116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.023125] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.023169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.023181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:49.339 [2024-12-05 12:54:49.023189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.023198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.023242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.023254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:49.339 [2024-12-05 12:54:49.023262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.023270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.023320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.339 [2024-12-05 12:54:49.023334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:49.339 [2024-12-05 12:54:49.023342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.339 [2024-12-05 12:54:49.023362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.339 [2024-12-05 12:54:49.023535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.917 ms, result 0 00:23:49.339 true 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86645 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86645 ']' 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86645 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86645 00:23:49.339 killing process with pid 86645 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86645' 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86645 00:23:49.339 12:54:49 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86645 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:55.945 12:54:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:55.945 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:55.945 fio-3.35 00:23:55.945 Starting 1 thread 00:23:58.543 00:23:58.543 test: (groupid=0, jobs=1): err= 0: pid=86814: Thu Dec 5 12:54:58 2024 00:23:58.543 read: IOPS=1326, BW=88.1MiB/s (92.4MB/s)(255MiB/2890msec) 00:23:58.543 slat (nsec): min=3904, max=29917, avg=5297.64, stdev=2658.21 00:23:58.543 clat (usec): min=227, max=921, avg=338.38, stdev=44.79 00:23:58.543 lat (usec): min=232, max=934, avg=343.68, stdev=45.94 00:23:58.543 clat percentiles (usec): 00:23:58.543 | 1.00th=[ 269], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 322], 00:23:58.543 | 30.00th=[ 326], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 330], 00:23:58.543 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 433], 00:23:58.543 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 709], 99.95th=[ 750], 00:23:58.543 | 99.99th=[ 922] 00:23:58.543 write: IOPS=1336, BW=88.7MiB/s (93.0MB/s)(256MiB/2886msec); 0 zone resets 00:23:58.543 slat (usec): min=17, max=102, avg=21.26, stdev= 5.09 00:23:58.543 clat (usec): min=279, max=999, avg=372.78, stdev=64.42 00:23:58.543 lat (usec): min=299, max=1041, avg=394.04, stdev=65.11 00:23:58.543 clat percentiles (usec): 00:23:58.543 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 338], 20.00th=[ 347], 00:23:58.543 | 30.00th=[ 351], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:23:58.543 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 424], 95.00th=[ 490], 00:23:58.543 | 99.00th=[ 660], 99.50th=[ 734], 99.90th=[ 955], 99.95th=[ 979], 00:23:58.543 | 99.99th=[ 1004] 00:23:58.543 bw ( KiB/s): min=88128, max=93568, per=100.00%, avg=91201.60, stdev=2025.44, samples=5 00:23:58.543 iops : min= 1296, max= 1376, avg=1341.20, stdev=29.79, samples=5 00:23:58.543 lat (usec) : 250=0.03%, 500=96.88%, 750=2.87%, 1000=0.22% 00:23:58.543 
cpu : usr=98.55%, sys=0.42%, ctx=10, majf=0, minf=1181 00:23:58.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.543 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:58.543 00:23:58.543 Run status group 0 (all jobs): 00:23:58.543 READ: bw=88.1MiB/s (92.4MB/s), 88.1MiB/s-88.1MiB/s (92.4MB/s-92.4MB/s), io=255MiB (267MB), run=2890-2890msec 00:23:58.543 WRITE: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=256MiB (269MB), run=2886-2886msec 00:23:59.475 ----------------------------------------------------- 00:23:59.475 Suppressions used: 00:23:59.475 count bytes template 00:23:59.475 1 5 /usr/src/fio/parse.c 00:23:59.475 1 8 libtcmalloc_minimal.so 00:23:59.475 1 904 libcrypto.so 00:23:59.475 ----------------------------------------------------- 00:23:59.475 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:59.475 12:54:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:59.733 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:59.733 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:59.733 fio-3.35 00:23:59.733 Starting 2 threads 00:24:26.291 00:24:26.291 first_half: (groupid=0, jobs=1): err= 0: pid=86902: Thu Dec 5 12:55:21 2024 00:24:26.291 read: IOPS=3018, BW=11.8MiB/s (12.4MB/s)(256MiB/21679msec) 00:24:26.291 slat (nsec): min=3135, max=24876, avg=4130.55, stdev=804.36 00:24:26.291 clat (msec): min=6, max=291, avg=35.95, stdev=22.24 00:24:26.291 lat (msec): min=6, max=291, avg=35.96, stdev=22.24 00:24:26.291 clat percentiles (msec): 00:24:26.291 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 30], 00:24:26.291 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:24:26.291 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 39], 95.00th=[ 69], 00:24:26.291 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 197], 99.95th=[ 257], 00:24:26.291 | 99.99th=[ 284] 00:24:26.291 write: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(256MiB/20676msec); 0 zone resets 00:24:26.291 slat (usec): min=3, max=1534, avg= 5.62, stdev= 6.61 00:24:26.291 clat (usec): min=324, max=38529, avg=6426.00, stdev=6371.39 00:24:26.291 lat (usec): min=329, max=38534, avg=6431.62, stdev=6371.66 00:24:26.291 clat percentiles (usec): 00:24:26.291 | 1.00th=[ 701], 5.00th=[ 840], 10.00th=[ 1123], 20.00th=[ 2442], 00:24:26.291 | 30.00th=[ 3490], 40.00th=[ 4424], 50.00th=[ 5080], 60.00th=[ 5538], 00:24:26.291 | 70.00th=[ 5932], 80.00th=[ 6783], 90.00th=[13435], 95.00th=[21365], 00:24:26.291 | 99.00th=[33162], 99.50th=[34866], 99.90th=[36439], 99.95th=[36963], 00:24:26.291 | 99.99th=[38011] 00:24:26.291 bw ( KiB/s): min= 3256, max=55792, per=100.00%, avg=26214.40, stdev=15519.21, samples=20 00:24:26.291 iops : min= 814, max=13948, avg=6553.60, stdev=3879.80, samples=20 00:24:26.291 lat (usec) : 500=0.04%, 750=1.00%, 1000=3.23% 00:24:26.291 lat (msec) : 2=4.09%, 4=9.28%, 10=25.56%, 20=5.67%, 50=47.85% 00:24:26.291 lat (msec) : 100=1.61%, 250=1.64%, 500=0.03% 00:24:26.291 cpu : usr=99.30%, sys=0.11%, ctx=62, majf=0, minf=5551 00:24:26.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:26.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.291 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.291 issued rwts: total=65428,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.291 second_half: (groupid=0, jobs=1): err= 0: pid=86903: Thu Dec 5 12:55:21 2024 00:24:26.291 read: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(256MiB/21560msec) 00:24:26.291 slat (usec): min=3, max=631, avg= 5.10, stdev= 3.28 00:24:26.291 clat (msec): min=10, max=210, avg=36.13, stdev=19.30 00:24:26.291 lat (msec): min=10, max=210, avg=36.13, stdev=19.30 00:24:26.291 clat percentiles (msec): 00:24:26.291 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:24:26.291 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:24:26.291 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 64], 00:24:26.291 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 171], 00:24:26.291 | 99.99th=[ 203] 
00:24:26.291 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(256MiB/21440msec); 0 zone resets 00:24:26.291 slat (usec): min=3, max=1194, avg= 6.62, stdev= 5.65 00:24:26.291 clat (usec): min=377, max=43028, avg=5988.13, stdev=4775.92 00:24:26.291 lat (usec): min=391, max=43035, avg=5994.75, stdev=4776.04 00:24:26.291 clat percentiles (usec): 00:24:26.291 | 1.00th=[ 783], 5.00th=[ 1237], 10.00th=[ 2245], 20.00th=[ 2999], 00:24:26.291 | 30.00th=[ 3654], 40.00th=[ 4359], 50.00th=[ 5014], 60.00th=[ 5538], 00:24:26.291 | 70.00th=[ 5866], 80.00th=[ 6849], 90.00th=[11994], 95.00th=[13960], 00:24:26.291 | 99.00th=[28705], 99.50th=[32375], 99.90th=[39584], 99.95th=[41157], 00:24:26.291 | 99.99th=[42730] 00:24:26.291 bw ( KiB/s): min= 2056, max=42632, per=97.45%, avg=23830.91, stdev=13208.44, samples=22 00:24:26.291 iops : min= 514, max=10658, avg=5957.82, stdev=3302.12, samples=22 00:24:26.291 lat (usec) : 500=0.02%, 750=0.31%, 1000=1.48% 00:24:26.291 lat (msec) : 2=2.30%, 4=13.58%, 10=24.76%, 20=6.66%, 50=47.76% 00:24:26.291 lat (msec) : 100=1.71%, 250=1.42% 00:24:26.291 cpu : usr=98.93%, sys=0.27%, ctx=49, majf=0, minf=5589 00:24:26.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:26.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.291 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.291 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.291 00:24:26.291 Run status group 0 (all jobs): 00:24:26.291 READ: bw=23.6MiB/s (24.7MB/s), 11.8MiB/s-11.9MiB/s (12.4MB/s-12.4MB/s), io=511MiB (536MB), run=21560-21679msec 00:24:26.291 WRITE: bw=23.9MiB/s (25.0MB/s), 11.9MiB/s-12.4MiB/s (12.5MB/s-13.0MB/s), io=512MiB (537MB), run=20676-21440msec 00:24:26.291 ----------------------------------------------------- 00:24:26.291 Suppressions used: 00:24:26.291 count bytes template 00:24:26.291 2 10 /usr/src/fio/parse.c 00:24:26.291 3 288 /usr/src/fio/iolog.c 00:24:26.291 1 8 libtcmalloc_minimal.so 00:24:26.291 1 904 libcrypto.so 00:24:26.291 ----------------------------------------------------- 00:24:26.291 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:26.291 12:55:23 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:26.291 12:55:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:26.291 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:26.291 fio-3.35 00:24:26.291 Starting 1 thread 00:24:38.540 00:24:38.540 test: (groupid=0, jobs=1): err= 0: pid=87178: Thu Dec 5 12:55:36 2024 00:24:38.540 read: IOPS=7826, BW=30.6MiB/s (32.1MB/s)(255MiB/8331msec) 00:24:38.540 slat (nsec): min=3120, max=29289, avg=3811.48, stdev=702.37 00:24:38.540 clat (usec): min=534, max=31096, avg=16346.18, stdev=1735.03 00:24:38.540 lat (usec): min=538, max=31100, avg=16349.99, stdev=1735.04 00:24:38.540 clat percentiles (usec): 00:24:38.540 | 1.00th=[14877], 5.00th=[15139], 10.00th=[15139], 20.00th=[15401], 00:24:38.540 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15926], 60.00th=[16057], 00:24:38.540 | 70.00th=[16188], 80.00th=[16450], 90.00th=[18220], 95.00th=[19792], 00:24:38.540 | 99.00th=[23200], 99.50th=[24511], 99.90th=[28443], 99.95th=[29754], 00:24:38.540 | 99.99th=[30540] 00:24:38.540 write: IOPS=16.3k, BW=63.7MiB/s (66.8MB/s)(256MiB/4018msec); 0 zone resets 00:24:38.540 slat (usec): min=4, max=385, avg= 6.61, stdev= 2.81 00:24:38.540 clat (usec): min=512, max=49069, avg=7802.47, stdev=9728.38 00:24:38.540 lat (usec): min=519, max=49074, avg=7809.08, stdev=9728.35 00:24:38.540 clat percentiles (usec): 00:24:38.540 | 1.00th=[ 627], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 848], 00:24:38.540 | 30.00th=[ 1020], 40.00th=[ 1434], 50.00th=[ 5407], 60.00th=[ 6128], 00:24:38.540 | 70.00th=[ 7046], 80.00th=[ 8356], 90.00th=[28181], 95.00th=[29754], 00:24:38.540 | 99.00th=[33424], 99.50th=[36439], 99.90th=[39060], 99.95th=[39584], 00:24:38.540 | 99.99th=[46400] 00:24:38.540 bw ( KiB/s): min= 1016, max=85464, per=89.29%, avg=58254.22, stdev=23803.57, samples=9 00:24:38.540 iops : min= 254, max=21366, avg=14563.56, stdev=5950.89, samples=9 00:24:38.540 lat (usec) : 750=5.76%, 1000=8.63% 00:24:38.540 lat (msec) : 2=6.21%, 4=0.56%, 10=20.84%, 20=47.70%, 50=10.29% 00:24:38.540 cpu : usr=99.16%, sys=0.17%, ctx=17, majf=0, minf=5577 00:24:38.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:38.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:38.540 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:38.540 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:38.540 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:38.540 00:24:38.540 Run status group 0 (all jobs): 00:24:38.540 READ: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=255MiB (267MB), run=8331-8331msec 00:24:38.540 WRITE: bw=63.7MiB/s (66.8MB/s), 63.7MiB/s-63.7MiB/s (66.8MB/s-66.8MB/s), io=256MiB (268MB), run=4018-4018msec 00:24:38.540 ----------------------------------------------------- 00:24:38.540 Suppressions used: 00:24:38.540 count bytes template 00:24:38.540 1 5 /usr/src/fio/parse.c 00:24:38.540 2 192 /usr/src/fio/iolog.c 00:24:38.540 1 8 libtcmalloc_minimal.so 00:24:38.540 1 904 libcrypto.so 00:24:38.540 ----------------------------------------------------- 00:24:38.540 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:38.540 Remove shared memory files 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69582 /dev/shm/spdk_tgt_trace.pid85587 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:38.540 ************************************ 00:24:38.540 END TEST ftl_fio_basic 00:24:38.540 ************************************ 00:24:38.540 00:24:38.540 real 0m55.975s 00:24:38.540 user 2m7.696s 00:24:38.540 sys 0m2.754s 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.540 12:55:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:38.540 12:55:37 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:38.540 12:55:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:38.540 12:55:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.540 12:55:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:38.540 ************************************ 00:24:38.540 START TEST ftl_bdevperf 00:24:38.540 ************************************ 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:38.540 * Looking for test storage... 
00:24:38.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:38.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.540 --rc genhtml_branch_coverage=1 00:24:38.540 --rc genhtml_function_coverage=1 00:24:38.540 --rc genhtml_legend=1 00:24:38.540 --rc geninfo_all_blocks=1 00:24:38.540 --rc geninfo_unexecuted_blocks=1 00:24:38.540 00:24:38.540 ' 00:24:38.540 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:38.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.541 --rc genhtml_branch_coverage=1 00:24:38.541 
--rc genhtml_function_coverage=1 00:24:38.541 --rc genhtml_legend=1 00:24:38.541 --rc geninfo_all_blocks=1 00:24:38.541 --rc geninfo_unexecuted_blocks=1 00:24:38.541 00:24:38.541 ' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:38.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.541 --rc genhtml_branch_coverage=1 00:24:38.541 --rc genhtml_function_coverage=1 00:24:38.541 --rc genhtml_legend=1 00:24:38.541 --rc geninfo_all_blocks=1 00:24:38.541 --rc geninfo_unexecuted_blocks=1 00:24:38.541 00:24:38.541 ' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:38.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.541 --rc genhtml_branch_coverage=1 00:24:38.541 --rc genhtml_function_coverage=1 00:24:38.541 --rc genhtml_legend=1 00:24:38.541 --rc geninfo_all_blocks=1 00:24:38.541 --rc geninfo_unexecuted_blocks=1 00:24:38.541 00:24:38.541 ' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=87394 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 87394 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 87394 ']' 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.541 12:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:38.541 [2024-12-05 12:55:37.384731] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
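Before the construction trace that follows, it may help to see the shape of what `ftl/bdevperf.sh` is about to do over RPC. This is a hedged sketch, not the script itself: the commands and the PCIe addresses/sizes are the ones that appear verbatim in the trace below, while the UUID captures assume `rpc.py` prints the new object's UUID on stdout (consistent with the `lvs=` and `split_bdev=` assignments further down).

```bash
#!/usr/bin/env bash
# Hedged sketch of the FTL bdev stack this bdevperf run is built on (per this trace).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: a thin-provisioned 103424 MiB lvol carved from the 0000:00:11.0 namespace.
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
lvs_uuid=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
base_uuid=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

# NV cache: a single 5171 MiB split (nvc0n1p0) of the 0000:00:10.0 namespace.
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$RPC bdev_split_create nvc0n1 -s 5171 1

# FTL bdev on top; -t 240 is the script's RPC timeout for the long create,
# and the L2P table is capped at 20 MiB of DRAM.
$RPC -t 240 bdev_ftl_create -b ftl0 -d "$base_uuid" -c nvc0n1p0 --l2p_dram_limit 20
```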
00:24:38.541 [2024-12-05 12:55:37.385683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87394 ] 00:24:38.541 [2024-12-05 12:55:37.555240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.541 [2024-12-05 12:55:37.581136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:38.541 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:38.799 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:39.057 { 00:24:39.057 "name": "nvme0n1", 00:24:39.057 "aliases": [ 00:24:39.057 "b4b0503c-ddb1-4e7e-a82c-569d149c7594" 00:24:39.057 ], 00:24:39.057 "product_name": "NVMe disk", 00:24:39.057 "block_size": 4096, 00:24:39.057 "num_blocks": 1310720, 00:24:39.057 "uuid": "b4b0503c-ddb1-4e7e-a82c-569d149c7594", 00:24:39.057 "numa_id": -1, 00:24:39.057 "assigned_rate_limits": { 00:24:39.057 "rw_ios_per_sec": 0, 00:24:39.057 "rw_mbytes_per_sec": 0, 00:24:39.057 "r_mbytes_per_sec": 0, 00:24:39.057 "w_mbytes_per_sec": 0 00:24:39.057 }, 00:24:39.057 "claimed": true, 00:24:39.057 "claim_type": "read_many_write_one", 00:24:39.057 "zoned": false, 00:24:39.057 "supported_io_types": { 00:24:39.057 "read": true, 00:24:39.057 "write": true, 00:24:39.057 "unmap": true, 00:24:39.057 "flush": true, 00:24:39.057 "reset": true, 00:24:39.057 "nvme_admin": true, 00:24:39.057 "nvme_io": true, 00:24:39.057 "nvme_io_md": false, 00:24:39.057 "write_zeroes": true, 00:24:39.057 "zcopy": false, 00:24:39.057 "get_zone_info": false, 00:24:39.057 "zone_management": false, 00:24:39.057 "zone_append": false, 00:24:39.057 "compare": true, 00:24:39.057 "compare_and_write": false, 00:24:39.057 "abort": true, 00:24:39.057 "seek_hole": false, 00:24:39.057 "seek_data": false, 00:24:39.057 "copy": true, 00:24:39.057 "nvme_iov_md": false 00:24:39.057 }, 00:24:39.057 "driver_specific": { 00:24:39.057 
"nvme": [ 00:24:39.057 { 00:24:39.057 "pci_address": "0000:00:11.0", 00:24:39.057 "trid": { 00:24:39.057 "trtype": "PCIe", 00:24:39.057 "traddr": "0000:00:11.0" 00:24:39.057 }, 00:24:39.057 "ctrlr_data": { 00:24:39.057 "cntlid": 0, 00:24:39.057 "vendor_id": "0x1b36", 00:24:39.057 "model_number": "QEMU NVMe Ctrl", 00:24:39.057 "serial_number": "12341", 00:24:39.057 "firmware_revision": "8.0.0", 00:24:39.057 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:39.057 "oacs": { 00:24:39.057 "security": 0, 00:24:39.057 "format": 1, 00:24:39.057 "firmware": 0, 00:24:39.057 "ns_manage": 1 00:24:39.057 }, 00:24:39.057 "multi_ctrlr": false, 00:24:39.057 "ana_reporting": false 00:24:39.057 }, 00:24:39.057 "vs": { 00:24:39.057 "nvme_version": "1.4" 00:24:39.057 }, 00:24:39.057 "ns_data": { 00:24:39.057 "id": 1, 00:24:39.057 "can_share": false 00:24:39.057 } 00:24:39.057 } 00:24:39.057 ], 00:24:39.057 "mp_policy": "active_passive" 00:24:39.057 } 00:24:39.057 } 00:24:39.057 ]' 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:39.057 12:55:38 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:39.316 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=3b75b8c6-c660-41bf-a0ca-4a6ce443403c 00:24:39.316 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:39.316 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b75b8c6-c660-41bf-a0ca-4a6ce443403c 00:24:39.573 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:39.830 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f41d6571-de1e-4cec-9611-8c6eb45e6213 00:24:39.830 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f41d6571-de1e-4cec-9611-8c6eb45e6213 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=30108093-61d6-4363-916f-b1c7845626e1 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 30108093-61d6-4363-916f-b1c7845626e1 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=30108093-61d6-4363-916f-b1c7845626e1 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 30108093-61d6-4363-916f-b1c7845626e1 00:24:40.098 12:55:39 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=30108093-61d6-4363-916f-b1c7845626e1 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30108093-61d6-4363-916f-b1c7845626e1 00:24:40.098 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:40.098 { 00:24:40.098 "name": "30108093-61d6-4363-916f-b1c7845626e1", 00:24:40.098 "aliases": [ 00:24:40.098 "lvs/nvme0n1p0" 00:24:40.098 ], 00:24:40.098 "product_name": "Logical Volume", 00:24:40.098 "block_size": 4096, 00:24:40.098 "num_blocks": 26476544, 00:24:40.098 "uuid": "30108093-61d6-4363-916f-b1c7845626e1", 00:24:40.098 "assigned_rate_limits": { 00:24:40.098 "rw_ios_per_sec": 0, 00:24:40.098 "rw_mbytes_per_sec": 0, 00:24:40.098 "r_mbytes_per_sec": 0, 00:24:40.098 "w_mbytes_per_sec": 0 00:24:40.098 }, 00:24:40.098 "claimed": false, 00:24:40.098 "zoned": false, 00:24:40.098 "supported_io_types": { 00:24:40.098 "read": true, 00:24:40.098 "write": true, 00:24:40.098 "unmap": true, 00:24:40.098 "flush": false, 00:24:40.098 "reset": true, 00:24:40.098 "nvme_admin": false, 00:24:40.098 "nvme_io": false, 00:24:40.098 "nvme_io_md": false, 00:24:40.098 "write_zeroes": true, 00:24:40.098 "zcopy": false, 00:24:40.098 "get_zone_info": false, 00:24:40.098 "zone_management": false, 00:24:40.098 "zone_append": false, 00:24:40.098 "compare": false, 00:24:40.098 "compare_and_write": false, 00:24:40.098 "abort": false, 00:24:40.098 "seek_hole": true, 00:24:40.098 "seek_data": true, 00:24:40.098 "copy": false, 00:24:40.098 "nvme_iov_md": false 00:24:40.098 }, 00:24:40.098 "driver_specific": { 00:24:40.098 "lvol": { 00:24:40.099 "lvol_store_uuid": "f41d6571-de1e-4cec-9611-8c6eb45e6213", 00:24:40.099 "base_bdev": "nvme0n1", 00:24:40.099 "thin_provision": true, 00:24:40.099 "num_allocated_clusters": 0, 00:24:40.099 "snapshot": false, 00:24:40.099 "clone": false, 00:24:40.099 "esnap_clone": false 00:24:40.099 } 00:24:40.099 } 00:24:40.099 } 00:24:40.099 ]' 00:24:40.099 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:40.099 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:40.099 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:40.358 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:40.358 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:40.358 12:55:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:40.358 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:40.358 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:40.358 12:55:39 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 30108093-61d6-4363-916f-b1c7845626e1 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=30108093-61d6-4363-916f-b1c7845626e1 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30108093-61d6-4363-916f-b1c7845626e1 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:40.614 { 00:24:40.614 "name": "30108093-61d6-4363-916f-b1c7845626e1", 00:24:40.614 "aliases": [ 00:24:40.614 "lvs/nvme0n1p0" 00:24:40.614 ], 00:24:40.614 "product_name": "Logical Volume", 00:24:40.614 "block_size": 4096, 00:24:40.614 "num_blocks": 26476544, 00:24:40.614 "uuid": "30108093-61d6-4363-916f-b1c7845626e1", 00:24:40.614 "assigned_rate_limits": { 00:24:40.614 "rw_ios_per_sec": 0, 00:24:40.614 "rw_mbytes_per_sec": 0, 00:24:40.614 "r_mbytes_per_sec": 0, 00:24:40.614 "w_mbytes_per_sec": 0 00:24:40.614 }, 00:24:40.614 "claimed": false, 00:24:40.614 "zoned": false, 00:24:40.614 "supported_io_types": { 00:24:40.614 "read": true, 00:24:40.614 "write": true, 00:24:40.614 "unmap": true, 00:24:40.614 "flush": false, 00:24:40.614 "reset": true, 00:24:40.614 "nvme_admin": false, 00:24:40.614 "nvme_io": false, 00:24:40.614 "nvme_io_md": false, 00:24:40.614 "write_zeroes": true, 00:24:40.614 "zcopy": false, 00:24:40.614 "get_zone_info": false, 00:24:40.614 "zone_management": false, 00:24:40.614 "zone_append": false, 00:24:40.614 "compare": false, 00:24:40.614 "compare_and_write": false, 00:24:40.614 "abort": false, 00:24:40.614 "seek_hole": true, 00:24:40.614 "seek_data": true, 00:24:40.614 "copy": false, 00:24:40.614 "nvme_iov_md": false 00:24:40.614 }, 00:24:40.614 "driver_specific": { 00:24:40.614 "lvol": { 00:24:40.614 "lvol_store_uuid": "f41d6571-de1e-4cec-9611-8c6eb45e6213", 00:24:40.614 "base_bdev": "nvme0n1", 00:24:40.614 "thin_provision": true, 00:24:40.614 "num_allocated_clusters": 0, 00:24:40.614 "snapshot": false, 00:24:40.614 "clone": false, 00:24:40.614 "esnap_clone": false 00:24:40.614 } 00:24:40.614 } 00:24:40.614 } 00:24:40.614 ]' 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:40.614 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 30108093-61d6-4363-916f-b1c7845626e1 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=30108093-61d6-4363-916f-b1c7845626e1 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:40.872 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30108093-61d6-4363-916f-b1c7845626e1 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:41.129 { 00:24:41.129 "name": "30108093-61d6-4363-916f-b1c7845626e1", 00:24:41.129 "aliases": [ 00:24:41.129 "lvs/nvme0n1p0" 00:24:41.129 ], 00:24:41.129 "product_name": "Logical Volume", 00:24:41.129 "block_size": 4096, 00:24:41.129 "num_blocks": 26476544, 00:24:41.129 "uuid": "30108093-61d6-4363-916f-b1c7845626e1", 00:24:41.129 "assigned_rate_limits": { 00:24:41.129 "rw_ios_per_sec": 0, 00:24:41.129 "rw_mbytes_per_sec": 0, 00:24:41.129 "r_mbytes_per_sec": 0, 00:24:41.129 "w_mbytes_per_sec": 0 00:24:41.129 }, 00:24:41.129 "claimed": false, 00:24:41.129 "zoned": false, 00:24:41.129 "supported_io_types": { 00:24:41.129 "read": true, 00:24:41.129 "write": true, 00:24:41.129 "unmap": true, 00:24:41.129 "flush": false, 00:24:41.129 "reset": true, 00:24:41.129 "nvme_admin": false, 00:24:41.129 "nvme_io": false, 00:24:41.129 "nvme_io_md": false, 00:24:41.129 "write_zeroes": true, 00:24:41.129 "zcopy": false, 00:24:41.129 "get_zone_info": false, 00:24:41.129 "zone_management": false, 00:24:41.129 "zone_append": false, 00:24:41.129 "compare": false, 00:24:41.129 "compare_and_write": false, 00:24:41.129 "abort": false, 00:24:41.129 "seek_hole": true, 00:24:41.129 "seek_data": true, 00:24:41.129 "copy": false, 00:24:41.129 "nvme_iov_md": false 00:24:41.129 }, 00:24:41.129 "driver_specific": { 00:24:41.129 "lvol": { 00:24:41.129 "lvol_store_uuid": "f41d6571-de1e-4cec-9611-8c6eb45e6213", 00:24:41.129 "base_bdev": "nvme0n1", 00:24:41.129 "thin_provision": true, 00:24:41.129 "num_allocated_clusters": 0, 00:24:41.129 "snapshot": false, 00:24:41.129 "clone": false, 00:24:41.129 "esnap_clone": false 00:24:41.129 } 00:24:41.129 } 00:24:41.129 } 00:24:41.129 ]' 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:41.129 12:55:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 30108093-61d6-4363-916f-b1c7845626e1 -c nvc0n1p0 --l2p_dram_limit 20 00:24:41.388 [2024-12-05 12:55:41.127617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.127883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:41.388 [2024-12-05 12:55:41.127917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:41.388 [2024-12-05 12:55:41.127926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.128007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.128018] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:41.388 [2024-12-05 12:55:41.128033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:41.388 [2024-12-05 12:55:41.128045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.128067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:41.388 [2024-12-05 12:55:41.128356] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:41.388 [2024-12-05 12:55:41.128373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.128384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:41.388 [2024-12-05 12:55:41.128398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:24:41.388 [2024-12-05 12:55:41.128406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.128474] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3e7b6dfc-300e-4e5c-a98b-c94bf5e08ef9 00:24:41.388 [2024-12-05 12:55:41.129835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.129860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:41.388 [2024-12-05 12:55:41.129875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:41.388 [2024-12-05 12:55:41.129887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.136745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.136785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:41.388 [2024-12-05 12:55:41.136796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.813 ms 00:24:41.388 [2024-12-05 12:55:41.136838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.136915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.136926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:41.388 [2024-12-05 12:55:41.136940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:41.388 [2024-12-05 12:55:41.136950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.137002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.137014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:41.388 [2024-12-05 12:55:41.137022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:41.388 [2024-12-05 12:55:41.137031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.137054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:41.388 [2024-12-05 12:55:41.138864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.138983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:41.388 [2024-12-05 12:55:41.139006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:24:41.388 [2024-12-05 12:55:41.139014] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.139052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.139060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:41.388 [2024-12-05 12:55:41.139072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:41.388 [2024-12-05 12:55:41.139080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.139099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:41.388 [2024-12-05 12:55:41.139261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:41.388 [2024-12-05 12:55:41.139275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:41.388 [2024-12-05 12:55:41.139286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:41.388 [2024-12-05 12:55:41.139297] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139306] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:41.388 [2024-12-05 12:55:41.139323] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:41.388 [2024-12-05 12:55:41.139335] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:41.388 [2024-12-05 12:55:41.139344] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:41.388 [2024-12-05 12:55:41.139353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.139363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:41.388 [2024-12-05 12:55:41.139373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:24:41.388 [2024-12-05 12:55:41.139380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.139467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.388 [2024-12-05 12:55:41.139475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:41.388 [2024-12-05 12:55:41.139484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:41.388 [2024-12-05 12:55:41.139491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.388 [2024-12-05 12:55:41.139586] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:41.388 [2024-12-05 12:55:41.139601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:41.388 [2024-12-05 12:55:41.139612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:41.388 [2024-12-05 12:55:41.139641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:41.388 
[2024-12-05 12:55:41.139658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:41.388 [2024-12-05 12:55:41.139668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:41.388 [2024-12-05 12:55:41.139685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:41.388 [2024-12-05 12:55:41.139692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:41.388 [2024-12-05 12:55:41.139704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:41.388 [2024-12-05 12:55:41.139712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:41.388 [2024-12-05 12:55:41.139721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:41.388 [2024-12-05 12:55:41.139729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:41.388 [2024-12-05 12:55:41.139753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:41.388 [2024-12-05 12:55:41.139780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:41.388 [2024-12-05 12:55:41.139801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:41.388 [2024-12-05 12:55:41.139842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:41.388 [2024-12-05 12:55:41.139866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:41.388 [2024-12-05 12:55:41.139882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:41.388 [2024-12-05 12:55:41.139891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:41.388 [2024-12-05 12:55:41.139906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:41.388 [2024-12-05 12:55:41.139913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:41.388 [2024-12-05 12:55:41.139921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:41.388 [2024-12-05 12:55:41.139927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:41.388 [2024-12-05 12:55:41.139936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:41.388 [2024-12-05 12:55:41.139942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:41.388 [2024-12-05 12:55:41.139957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:41.388 [2024-12-05 12:55:41.139965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:41.388 [2024-12-05 12:55:41.139971] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:41.388 [2024-12-05 12:55:41.139985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:41.388 [2024-12-05 12:55:41.140001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:41.388 [2024-12-05 12:55:41.140010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:41.388 [2024-12-05 12:55:41.140017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:41.388 [2024-12-05 12:55:41.140026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:41.388 [2024-12-05 12:55:41.140034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:41.388 [2024-12-05 12:55:41.140044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:41.388 [2024-12-05 12:55:41.140051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:41.388 [2024-12-05 12:55:41.140060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:41.388 [2024-12-05 12:55:41.140068] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:41.388 [2024-12-05 12:55:41.140079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:41.389 [2024-12-05 12:55:41.140087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:41.389 [2024-12-05 12:55:41.140096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:41.389 [2024-12-05 12:55:41.140105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:41.389 [2024-12-05 12:55:41.140119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:41.389 [2024-12-05 12:55:41.140131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:41.389 [2024-12-05 12:55:41.140142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:41.389 [2024-12-05 12:55:41.140149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:41.389 [2024-12-05 12:55:41.140157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:41.389 [2024-12-05 12:55:41.140164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:41.389 [2024-12-05 12:55:41.140173] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:41.389 [2024-12-05 12:55:41.140180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:41.389 [2024-12-05 12:55:41.140195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:41.389 [2024-12-05 12:55:41.140203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:41.389 [2024-12-05 12:55:41.140213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:41.389 [2024-12-05 12:55:41.140220] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:41.389 [2024-12-05 12:55:41.140232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:41.389 [2024-12-05 12:55:41.140240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:41.389 [2024-12-05 12:55:41.140249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:41.389 [2024-12-05 12:55:41.140256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:41.389 [2024-12-05 12:55:41.140265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:41.389 [2024-12-05 12:55:41.140272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.389 [2024-12-05 12:55:41.140283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:41.389 [2024-12-05 12:55:41.140291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:24:41.389 [2024-12-05 12:55:41.140300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.389 [2024-12-05 12:55:41.140346] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
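The sizes reported in the startup dump above are internally consistent and can be checked by hand: the base bdev's 26476544 blocks of 4096 bytes come to 103424 MiB, matching both the bdev_size computed by the script and the "Base device capacity" line, and the 20971520 L2P entries at 4 bytes each account for the 80.00 MiB l2p region. A small shell sketch of that arithmetic, with all inputs copied from this run (they would differ for other device sizes):

#!/usr/bin/env bash
# Sanity-check the FTL layout numbers printed during startup.
num_blocks=26476544      # from bdev_get_bdevs .num_blocks
block_size=4096          # from bdev_get_bdevs .block_size
l2p_entries=20971520     # from "L2P entries" in the layout dump
l2p_addr_size=4          # from "L2P address size" in the layout dump

# Base device capacity in MiB: 26476544 * 4096 / 1048576 = 103424
echo "base capacity: $(( num_blocks * block_size / 1048576 )) MiB"

# L2P table size in MiB: 20971520 * 4 / 1048576 = 80
echo "l2p region:    $(( l2p_entries * l2p_addr_size / 1048576 )) MiB"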
00:24:41.389 [2024-12-05 12:55:41.140363] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:43.919 [2024-12-05 12:55:43.272221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.272487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:43.919 [2024-12-05 12:55:43.272564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2131.862 ms 00:24:43.919 [2024-12-05 12:55:43.272594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.283920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.284145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:43.919 [2024-12-05 12:55:43.284217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.187 ms 00:24:43.919 [2024-12-05 12:55:43.284247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.284374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.284409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:43.919 [2024-12-05 12:55:43.284466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:43.919 [2024-12-05 12:55:43.284491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.302066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.302351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:43.919 [2024-12-05 12:55:43.302492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.483 ms 00:24:43.919 [2024-12-05 12:55:43.302532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.302619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.302714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:43.919 [2024-12-05 12:55:43.302750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:43.919 [2024-12-05 12:55:43.302779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.303472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.303624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:43.919 [2024-12-05 12:55:43.303705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:24:43.919 [2024-12-05 12:55:43.303743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.304120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.304212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:43.919 [2024-12-05 12:55:43.304290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:24:43.919 [2024-12-05 12:55:43.304331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.311627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.311799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:43.919 [2024-12-05 
12:55:43.311889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.254 ms 00:24:43.919 [2024-12-05 12:55:43.311924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.321383] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:43.919 [2024-12-05 12:55:43.327622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.327789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:43.919 [2024-12-05 12:55:43.327865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.536 ms 00:24:43.919 [2024-12-05 12:55:43.327890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.375889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.376169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:43.919 [2024-12-05 12:55:43.376302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.927 ms 00:24:43.919 [2024-12-05 12:55:43.376328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.376616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.376643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:43.919 [2024-12-05 12:55:43.376662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:24:43.919 [2024-12-05 12:55:43.376674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.919 [2024-12-05 12:55:43.380803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.919 [2024-12-05 12:55:43.380882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:43.919 [2024-12-05 12:55:43.380903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.063 ms 00:24:43.920 [2024-12-05 12:55:43.380916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.384170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.384221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:43.920 [2024-12-05 12:55:43.384256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:24:43.920 [2024-12-05 12:55:43.384268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.384734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.384764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:43.920 [2024-12-05 12:55:43.384784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:24:43.920 [2024-12-05 12:55:43.384796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.410377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.410447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:43.920 [2024-12-05 12:55:43.410465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.513 ms 00:24:43.920 [2024-12-05 12:55:43.410473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.415184] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.415364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:43.920 [2024-12-05 12:55:43.415386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:24:43.920 [2024-12-05 12:55:43.415395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.418984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.419032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:43.920 [2024-12-05 12:55:43.419044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.542 ms 00:24:43.920 [2024-12-05 12:55:43.419052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.422383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.422531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:43.920 [2024-12-05 12:55:43.422554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:24:43.920 [2024-12-05 12:55:43.422562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.422602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.422614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:43.920 [2024-12-05 12:55:43.422626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:43.920 [2024-12-05 12:55:43.422634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.422706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.920 [2024-12-05 12:55:43.422716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:43.920 [2024-12-05 12:55:43.422731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:43.920 [2024-12-05 12:55:43.422739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.920 [2024-12-05 12:55:43.423787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2295.710 ms, result 0 00:24:43.920 { 00:24:43.920 "name": "ftl0", 00:24:43.920 "uuid": "3e7b6dfc-300e-4e5c-a98b-c94bf5e08ef9" 00:24:43.920 } 00:24:43.920 12:55:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:43.920 12:55:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:43.920 12:55:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:43.920 12:55:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:44.177 [2024-12-05 12:55:43.812776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:44.177 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:44.177 Zero copy mechanism will not be used. 00:24:44.177 Running I/O for 4 seconds... 
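The zero-copy notice just above follows from the chosen I/O size: 69632 bytes is 17 * 4096 = 68 KiB, which exceeds the 65536-byte (64 KiB) threshold reported by bdevperf, so this run falls back to buffered copies. A sketch of the comparison, with the threshold value taken from the notice itself (this is an illustration of the check, not the bdevperf source):

#!/usr/bin/env bash
# Why the first run prints the zero-copy notice.
io_size=69632            # -o 69632 on the perform_tests command line
zcopy_threshold=65536    # 64 KiB, as reported in the notice above
if (( io_size > zcopy_threshold )); then
    echo "I/O size of ${io_size} is greater than zero copy threshold (${zcopy_threshold})."
    echo "Zero copy mechanism will not be used."
fi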
00:24:46.040 3099.00 IOPS, 205.79 MiB/s [2024-12-05T12:55:46.870Z] 3100.50 IOPS, 205.89 MiB/s [2024-12-05T12:55:48.244Z] 2960.33 IOPS, 196.58 MiB/s 00:24:48.392 Latency(us) 00:24:48.392 [2024-12-05T12:55:48.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.392 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:24:48.392 ftl0 : 4.00 2989.13 198.50 0.00 0.00 351.10 159.11 129862.10 00:24:48.392 [2024-12-05T12:55:48.244Z] =================================================================================================================== 00:24:48.392 [2024-12-05T12:55:48.244Z] Total : 2989.13 198.50 0.00 0.00 351.10 159.11 129862.10 00:24:48.392 [2024-12-05 12:55:47.819216] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:48.392 { 00:24:48.392 "results": [ 00:24:48.392 { 00:24:48.392 "job": "ftl0", 00:24:48.392 "core_mask": "0x1", 00:24:48.392 "workload": "randwrite", 00:24:48.392 "status": "finished", 00:24:48.392 "queue_depth": 1, 00:24:48.392 "io_size": 69632, 00:24:48.392 "runtime": 4.000154, 00:24:48.392 "iops": 2989.1349183056454, 00:24:48.392 "mibps": 198.49724066873426, 00:24:48.392 "io_failed": 0, 00:24:48.392 "io_timeout": 0, 00:24:48.392 "avg_latency_us": 351.0951205923791, 00:24:48.392 "min_latency_us": 159.11384615384614, 00:24:48.392 "max_latency_us": 129862.10461538461 00:24:48.392 } 00:24:48.392 ], 00:24:48.392 "core_count": 1 00:24:48.392 } 00:24:48.392 12:55:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:24:48.392 [2024-12-05 12:55:47.932398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:48.392 Running I/O for 4 seconds... 
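The MiB/s column in these tables follows directly from IOPS multiplied by the I/O size: for the first run above, 2989.1349 IOPS at 69632 bytes per I/O works out to about 198.50 MiB/s, matching the "mibps" field in its results JSON. A quick cross-check, with the numbers copied from that JSON:

#!/usr/bin/env bash
# Cross-check MiB/s against IOPS for the 69632-byte randwrite run.
# awk handles the floating-point math.
awk 'BEGIN {
    iops    = 2989.1349183056454   # "iops" from the results JSON above
    io_size = 69632                # "io_size" from the results JSON above
    printf "derived MiB/s: %.2f\n", iops * io_size / 1048576   # ~198.50
}'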
00:24:50.380 10071.00 IOPS, 39.34 MiB/s [2024-12-05T12:55:51.163Z] 8951.00 IOPS, 34.96 MiB/s [2024-12-05T12:55:52.094Z] 8831.67 IOPS, 34.50 MiB/s [2024-12-05T12:55:52.094Z] 8591.50 IOPS, 33.56 MiB/s 00:24:52.242 Latency(us) 00:24:52.242 [2024-12-05T12:55:52.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.242 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:24:52.242 ftl0 : 4.02 8586.00 33.54 0.00 0.00 14875.64 244.18 186323.89 00:24:52.242 [2024-12-05T12:55:52.094Z] =================================================================================================================== 00:24:52.242 [2024-12-05T12:55:52.094Z] Total : 8586.00 33.54 0.00 0.00 14875.64 0.00 186323.89 00:24:52.242 { 00:24:52.242 "results": [ 00:24:52.242 { 00:24:52.242 "job": "ftl0", 00:24:52.242 "core_mask": "0x1", 00:24:52.242 "workload": "randwrite", 00:24:52.242 "status": "finished", 00:24:52.242 "queue_depth": 128, 00:24:52.242 "io_size": 4096, 00:24:52.242 "runtime": 4.017237, 00:24:52.242 "iops": 8586.000776155353, 00:24:52.242 "mibps": 33.53906553185685, 00:24:52.242 "io_failed": 0, 00:24:52.242 "io_timeout": 0, 00:24:52.242 "avg_latency_us": 14875.638381073873, 00:24:52.242 "min_latency_us": 244.1846153846154, 00:24:52.242 "max_latency_us": 186323.88923076924 00:24:52.242 } 00:24:52.242 ], 00:24:52.242 "core_count": 1 00:24:52.242 } 00:24:52.242 [2024-12-05 12:55:51.956303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:52.242 12:55:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:24:52.499 [2024-12-05 12:55:52.098419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:52.499 Running I/O for 4 seconds... 
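Each run also emits the machine-readable JSON block shown above, so individual fields can be pulled out with jq. A sketch, assuming the blob has been saved to results.json (the filename is hypothetical; the field names are exactly those printed in this log):

#!/usr/bin/env bash
# Extract headline numbers from a saved bdevperf results blob.
# results.json is a hypothetical capture of the JSON printed above.
jq -r '.results[] |
       "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"' \
   results.json
echo "cores used: $(jq -r '.core_count' results.json)"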
00:24:54.365 8427.00 IOPS, 32.92 MiB/s [2024-12-05T12:55:55.150Z] 8540.00 IOPS, 33.36 MiB/s [2024-12-05T12:55:56.523Z] 8247.33 IOPS, 32.22 MiB/s [2024-12-05T12:55:56.523Z] 8592.00 IOPS, 33.56 MiB/s 00:24:56.671 Latency(us) 00:24:56.671 [2024-12-05T12:55:56.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.671 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:56.671 Verification LBA range: start 0x0 length 0x1400000 00:24:56.671 ftl0 : 4.01 8605.06 33.61 0.00 0.00 14829.76 220.55 97598.23 00:24:56.671 [2024-12-05T12:55:56.523Z] =================================================================================================================== 00:24:56.671 [2024-12-05T12:55:56.523Z] Total : 8605.06 33.61 0.00 0.00 14829.76 0.00 97598.23 00:24:56.671 [2024-12-05 12:55:56.113963] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:56.671 { 00:24:56.671 "results": [ 00:24:56.671 { 00:24:56.671 "job": "ftl0", 00:24:56.671 "core_mask": "0x1", 00:24:56.671 "workload": "verify", 00:24:56.671 "status": "finished", 00:24:56.671 "verify_range": { 00:24:56.671 "start": 0, 00:24:56.671 "length": 20971520 00:24:56.671 }, 00:24:56.671 "queue_depth": 128, 00:24:56.671 "io_size": 4096, 00:24:56.671 "runtime": 4.008573, 00:24:56.671 "iops": 8605.057211132242, 00:24:56.671 "mibps": 33.61350473098532, 00:24:56.671 "io_failed": 0, 00:24:56.671 "io_timeout": 0, 00:24:56.671 "avg_latency_us": 14829.762162248955, 00:24:56.671 "min_latency_us": 220.55384615384617, 00:24:56.671 "max_latency_us": 97598.22769230769 00:24:56.671 } 00:24:56.671 ], 00:24:56.671 "core_count": 1 00:24:56.671 } 00:24:56.671 12:55:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:24:56.671 [2024-12-05 12:55:56.342645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.342773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:56.671 [2024-12-05 12:55:56.342862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:56.671 [2024-12-05 12:55:56.342890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.342968] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:56.671 [2024-12-05 12:55:56.343857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.343931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:56.671 [2024-12-05 12:55:56.343963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:24:56.671 [2024-12-05 12:55:56.343994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.346432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.346521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:56.671 [2024-12-05 12:55:56.346552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:24:56.671 [2024-12-05 12:55:56.346585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.493876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.493967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:24:56.671 [2024-12-05 12:55:56.493987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 147.244 ms 00:24:56.671 [2024-12-05 12:55:56.493998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.500912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.501082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:56.671 [2024-12-05 12:55:56.501102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.876 ms 00:24:56.671 [2024-12-05 12:55:56.501113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.502872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.502923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:56.671 [2024-12-05 12:55:56.502936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:24:56.671 [2024-12-05 12:55:56.502950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.507059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.507098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:56.671 [2024-12-05 12:55:56.507109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:24:56.671 [2024-12-05 12:55:56.507122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.507236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.507248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:56.671 [2024-12-05 12:55:56.507257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:56.671 [2024-12-05 12:55:56.507267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.508769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.508922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:56.671 [2024-12-05 12:55:56.508938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:24:56.671 [2024-12-05 12:55:56.508947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.510107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.510139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:56.671 [2024-12-05 12:55:56.510147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:24:56.671 [2024-12-05 12:55:56.510156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.511051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.511088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:56.671 [2024-12-05 12:55:56.511097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:24:56.671 [2024-12-05 12:55:56.511109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.512173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.671 [2024-12-05 12:55:56.512293] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:56.671 [2024-12-05 12:55:56.512307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:24:56.671 [2024-12-05 12:55:56.512316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.671 [2024-12-05 12:55:56.512342] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:56.671 [2024-12-05 12:55:56.512370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:56.671 [2024-12-05 12:55:56.512513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:24:56.672 [2024-12-05 12:55:56.512556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.512999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513233] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:56.672 [2024-12-05 12:55:56.513279] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:56.672 [2024-12-05 12:55:56.513287] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e7b6dfc-300e-4e5c-a98b-c94bf5e08ef9 00:24:56.672 [2024-12-05 12:55:56.513297] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:56.672 [2024-12-05 12:55:56.513305] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:56.672 [2024-12-05 12:55:56.513314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:56.672 [2024-12-05 12:55:56.513322] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:56.672 [2024-12-05 12:55:56.513333] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:56.672 [2024-12-05 12:55:56.513340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:56.672 [2024-12-05 12:55:56.513367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:56.673 [2024-12-05 12:55:56.513373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:56.673 [2024-12-05 12:55:56.513382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:56.673 [2024-12-05 12:55:56.513388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.673 [2024-12-05 12:55:56.513401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:56.673 [2024-12-05 12:55:56.513411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:24:56.673 [2024-12-05 12:55:56.513419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.673 [2024-12-05 12:55:56.515211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.673 [2024-12-05 12:55:56.515236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:56.673 [2024-12-05 12:55:56.515245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:24:56.673 [2024-12-05 12:55:56.515255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.673 [2024-12-05 12:55:56.515359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.673 [2024-12-05 12:55:56.515373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:56.673 [2024-12-05 12:55:56.515381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:56.673 [2024-12-05 12:55:56.515393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.930 [2024-12-05 12:55:56.521648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.930 [2024-12-05 12:55:56.521766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.930 [2024-12-05 12:55:56.521832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.930 [2024-12-05 12:55:56.521859] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:56.930 [2024-12-05 12:55:56.522005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.930 [2024-12-05 12:55:56.522035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.930 [2024-12-05 12:55:56.522087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.930 [2024-12-05 12:55:56.522111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.930 [2024-12-05 12:55:56.522191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.930 [2024-12-05 12:55:56.522244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.930 [2024-12-05 12:55:56.522264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.930 [2024-12-05 12:55:56.522317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.930 [2024-12-05 12:55:56.522349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.930 [2024-12-05 12:55:56.522371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.930 [2024-12-05 12:55:56.522509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.930 [2024-12-05 12:55:56.522546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.930 [2024-12-05 12:55:56.533881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.930 [2024-12-05 12:55:56.534024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.931 [2024-12-05 12:55:56.534077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.931 [2024-12-05 12:55:56.534102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.543620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.931 [2024-12-05 12:55:56.543796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.931 [2024-12-05 12:55:56.543864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.931 [2024-12-05 12:55:56.543897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.544011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.931 [2024-12-05 12:55:56.544074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.931 [2024-12-05 12:55:56.544121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.931 [2024-12-05 12:55:56.544214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.544279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.931 [2024-12-05 12:55:56.544337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.931 [2024-12-05 12:55:56.544362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.931 [2024-12-05 12:55:56.544471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.544563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.931 [2024-12-05 12:55:56.544693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.931 [2024-12-05 12:55:56.544748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:24:56.931 [2024-12-05 12:55:56.544881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.544946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.931 [2024-12-05 12:55:56.545015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:56.931 [2024-12-05 12:55:56.545039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.931 [2024-12-05 12:55:56.545085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.545145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.931 [2024-12-05 12:55:56.545277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.931 [2024-12-05 12:55:56.545304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.931 [2024-12-05 12:55:56.545325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.545389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.931 [2024-12-05 12:55:56.545421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.931 [2024-12-05 12:55:56.545449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.931 [2024-12-05 12:55:56.545477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.931 [2024-12-05 12:55:56.545662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 203.038 ms, result 0 00:24:56.931 true 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 87394 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 87394 ']' 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 87394 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87394 00:24:56.931 killing process with pid 87394 00:24:56.931 Received shutdown signal, test time was about 4.000000 seconds 00:24:56.931 00:24:56.931 Latency(us) 00:24:56.931 [2024-12-05T12:55:56.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.931 [2024-12-05T12:55:56.783Z] =================================================================================================================== 00:24:56.931 [2024-12-05T12:55:56.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87394' 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 87394 00:24:56.931 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 87394 00:24:57.188 Remove shared memory files 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:57.188 12:55:56 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:24:57.188 ************************************ 00:24:57.188 END TEST ftl_bdevperf 00:24:57.188 ************************************ 00:24:57.188 00:24:57.188 real 0m19.810s 00:24:57.188 user 0m22.713s 00:24:57.188 sys 0m0.860s 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.188 12:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:57.188 12:55:56 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:57.188 12:55:56 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:57.188 12:55:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.189 12:55:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:57.189 ************************************ 00:24:57.189 START TEST ftl_trim 00:24:57.189 ************************************ 00:24:57.189 12:55:56 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:57.447 * Looking for test storage... 00:24:57.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.447 12:55:57 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:57.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.447 --rc genhtml_branch_coverage=1 00:24:57.447 --rc genhtml_function_coverage=1 00:24:57.447 --rc genhtml_legend=1 00:24:57.447 --rc geninfo_all_blocks=1 00:24:57.447 --rc geninfo_unexecuted_blocks=1 00:24:57.447 00:24:57.447 ' 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:57.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.447 --rc genhtml_branch_coverage=1 00:24:57.447 --rc genhtml_function_coverage=1 00:24:57.447 --rc genhtml_legend=1 00:24:57.447 --rc geninfo_all_blocks=1 00:24:57.447 --rc geninfo_unexecuted_blocks=1 00:24:57.447 00:24:57.447 ' 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:57.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.447 --rc genhtml_branch_coverage=1 00:24:57.447 --rc genhtml_function_coverage=1 00:24:57.447 --rc genhtml_legend=1 00:24:57.447 --rc geninfo_all_blocks=1 00:24:57.447 --rc geninfo_unexecuted_blocks=1 00:24:57.447 00:24:57.447 ' 00:24:57.447 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:57.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.447 --rc genhtml_branch_coverage=1 00:24:57.447 --rc genhtml_function_coverage=1 00:24:57.448 --rc genhtml_legend=1 00:24:57.448 --rc geninfo_all_blocks=1 00:24:57.448 --rc geninfo_unexecuted_blocks=1 00:24:57.448 00:24:57.448 ' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
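
The scripts/common.sh xtrace a few lines up is the harness deciding whether the installed lcov is new enough for branch and function coverage: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field. A condensed sketch of that check in plain bash (lt and cmp_versions are the names from the trace; the real scripts/common.sh is more general, and this abbreviates it to the '<' case exercised here):

    lt() { cmp_versions "$1" '<' "$2"; }                # lt 1.15 2: is lcov 1.15 strictly older than 2?
    cmp_versions() {
        local ver1 ver2 v
        IFS='.-:' read -ra ver1 <<< "$1"                # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$3"                # "2"    -> (2)
        # walk the longer of the two component lists; missing fields count as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # first differing field decides
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1                                        # equal versions are not strictly less
    }

Since 1 < 2 already decides it at the first field, lt succeeds here and the branch/function-coverage LCOV_OPTS values exported above get used for the rest of the test.
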
00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:57.448 12:55:57 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87723 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87723 00:24:57.448 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87723 ']' 00:24:57.448 12:55:57 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:24:57.448 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.448 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.448 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.448 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.448 12:55:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:57.448 [2024-12-05 12:55:57.234166] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:24:57.448 [2024-12-05 12:55:57.234466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87723 ] 00:24:57.707 [2024-12-05 12:55:57.387521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:57.707 [2024-12-05 12:55:57.414724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.707 [2024-12-05 12:55:57.414984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.707 [2024-12-05 12:55:57.415010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.732 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.732 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:24:58.732 12:55:58 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:58.732 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:58.732 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:58.732 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:58.732 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:58.732 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:58.990 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:58.990 { 00:24:58.990 "name": "nvme0n1", 00:24:58.990 "aliases": [ 
00:24:58.990 "f2fb4c54-0150-4142-949a-0e621f06dfee" 00:24:58.990 ], 00:24:58.990 "product_name": "NVMe disk", 00:24:58.990 "block_size": 4096, 00:24:58.990 "num_blocks": 1310720, 00:24:58.990 "uuid": "f2fb4c54-0150-4142-949a-0e621f06dfee", 00:24:58.990 "numa_id": -1, 00:24:58.990 "assigned_rate_limits": { 00:24:58.990 "rw_ios_per_sec": 0, 00:24:58.990 "rw_mbytes_per_sec": 0, 00:24:58.990 "r_mbytes_per_sec": 0, 00:24:58.990 "w_mbytes_per_sec": 0 00:24:58.990 }, 00:24:58.990 "claimed": true, 00:24:58.990 "claim_type": "read_many_write_one", 00:24:58.990 "zoned": false, 00:24:58.990 "supported_io_types": { 00:24:58.990 "read": true, 00:24:58.990 "write": true, 00:24:58.990 "unmap": true, 00:24:58.990 "flush": true, 00:24:58.990 "reset": true, 00:24:58.990 "nvme_admin": true, 00:24:58.990 "nvme_io": true, 00:24:58.990 "nvme_io_md": false, 00:24:58.990 "write_zeroes": true, 00:24:58.990 "zcopy": false, 00:24:58.990 "get_zone_info": false, 00:24:58.990 "zone_management": false, 00:24:58.990 "zone_append": false, 00:24:58.990 "compare": true, 00:24:58.990 "compare_and_write": false, 00:24:58.990 "abort": true, 00:24:58.990 "seek_hole": false, 00:24:58.990 "seek_data": false, 00:24:58.990 "copy": true, 00:24:58.990 "nvme_iov_md": false 00:24:58.990 }, 00:24:58.990 "driver_specific": { 00:24:58.990 "nvme": [ 00:24:58.990 { 00:24:58.990 "pci_address": "0000:00:11.0", 00:24:58.990 "trid": { 00:24:58.990 "trtype": "PCIe", 00:24:58.990 "traddr": "0000:00:11.0" 00:24:58.990 }, 00:24:58.990 "ctrlr_data": { 00:24:58.990 "cntlid": 0, 00:24:58.990 "vendor_id": "0x1b36", 00:24:58.990 "model_number": "QEMU NVMe Ctrl", 00:24:58.990 "serial_number": "12341", 00:24:58.990 "firmware_revision": "8.0.0", 00:24:58.990 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:58.990 "oacs": { 00:24:58.990 "security": 0, 00:24:58.990 "format": 1, 00:24:58.990 "firmware": 0, 00:24:58.990 "ns_manage": 1 00:24:58.990 }, 00:24:58.990 "multi_ctrlr": false, 00:24:58.990 "ana_reporting": false 00:24:58.990 }, 00:24:58.990 "vs": { 00:24:58.990 "nvme_version": "1.4" 00:24:58.990 }, 00:24:58.990 "ns_data": { 00:24:58.991 "id": 1, 00:24:58.991 "can_share": false 00:24:58.991 } 00:24:58.991 } 00:24:58.991 ], 00:24:58.991 "mp_policy": "active_passive" 00:24:58.991 } 00:24:58.991 } 00:24:58.991 ]' 00:24:58.991 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:58.991 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:58.991 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:58.991 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:58.991 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:58.991 12:55:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:24:58.991 12:55:58 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:24:58.991 12:55:58 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:58.991 12:55:58 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:24:58.991 12:55:58 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:58.991 12:55:58 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:59.248 12:55:58 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f41d6571-de1e-4cec-9611-8c6eb45e6213 00:24:59.248 12:55:58 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:24:59.248 12:55:58 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u f41d6571-de1e-4cec-9611-8c6eb45e6213 00:24:59.505 12:55:59 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=adae6a3d-7946-4efe-b75e-4c8178582b7c 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u adae6a3d-7946-4efe-b75e-4c8178582b7c 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:24:59.763 12:55:59 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:24:59.763 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:24:59.763 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:59.763 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:59.763 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:59.763 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:25:00.022 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:00.022 { 00:25:00.022 "name": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 00:25:00.022 "aliases": [ 00:25:00.022 "lvs/nvme0n1p0" 00:25:00.022 ], 00:25:00.022 "product_name": "Logical Volume", 00:25:00.022 "block_size": 4096, 00:25:00.022 "num_blocks": 26476544, 00:25:00.022 "uuid": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 00:25:00.022 "assigned_rate_limits": { 00:25:00.022 "rw_ios_per_sec": 0, 00:25:00.022 "rw_mbytes_per_sec": 0, 00:25:00.022 "r_mbytes_per_sec": 0, 00:25:00.022 "w_mbytes_per_sec": 0 00:25:00.022 }, 00:25:00.022 "claimed": false, 00:25:00.022 "zoned": false, 00:25:00.022 "supported_io_types": { 00:25:00.022 "read": true, 00:25:00.022 "write": true, 00:25:00.022 "unmap": true, 00:25:00.022 "flush": false, 00:25:00.022 "reset": true, 00:25:00.022 "nvme_admin": false, 00:25:00.022 "nvme_io": false, 00:25:00.022 "nvme_io_md": false, 00:25:00.022 "write_zeroes": true, 00:25:00.022 "zcopy": false, 00:25:00.022 "get_zone_info": false, 00:25:00.022 "zone_management": false, 00:25:00.022 "zone_append": false, 00:25:00.022 "compare": false, 00:25:00.022 "compare_and_write": false, 00:25:00.022 "abort": false, 00:25:00.022 "seek_hole": true, 00:25:00.022 "seek_data": true, 00:25:00.022 "copy": false, 00:25:00.022 "nvme_iov_md": false 00:25:00.022 }, 00:25:00.022 "driver_specific": { 00:25:00.022 "lvol": { 00:25:00.022 "lvol_store_uuid": "adae6a3d-7946-4efe-b75e-4c8178582b7c", 00:25:00.022 "base_bdev": "nvme0n1", 00:25:00.022 "thin_provision": true, 00:25:00.022 "num_allocated_clusters": 0, 00:25:00.022 "snapshot": false, 00:25:00.022 "clone": false, 00:25:00.022 "esnap_clone": false 00:25:00.022 } 00:25:00.022 } 00:25:00.022 } 00:25:00.022 ]' 00:25:00.022 12:55:59 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:00.022 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:00.022 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:00.022 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:00.022 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:00.022 12:55:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:00.022 12:55:59 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:00.022 12:55:59 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:00.022 12:55:59 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:00.279 12:56:00 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:00.279 12:56:00 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:00.279 12:56:00 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:25:00.279 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:25:00.279 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:00.279 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:00.279 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:00.279 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:25:00.537 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:00.537 { 00:25:00.537 "name": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 00:25:00.537 "aliases": [ 00:25:00.537 "lvs/nvme0n1p0" 00:25:00.537 ], 00:25:00.537 "product_name": "Logical Volume", 00:25:00.537 "block_size": 4096, 00:25:00.537 "num_blocks": 26476544, 00:25:00.537 "uuid": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 00:25:00.537 "assigned_rate_limits": { 00:25:00.537 "rw_ios_per_sec": 0, 00:25:00.537 "rw_mbytes_per_sec": 0, 00:25:00.537 "r_mbytes_per_sec": 0, 00:25:00.537 "w_mbytes_per_sec": 0 00:25:00.537 }, 00:25:00.537 "claimed": false, 00:25:00.537 "zoned": false, 00:25:00.537 "supported_io_types": { 00:25:00.537 "read": true, 00:25:00.537 "write": true, 00:25:00.537 "unmap": true, 00:25:00.537 "flush": false, 00:25:00.537 "reset": true, 00:25:00.537 "nvme_admin": false, 00:25:00.537 "nvme_io": false, 00:25:00.537 "nvme_io_md": false, 00:25:00.537 "write_zeroes": true, 00:25:00.537 "zcopy": false, 00:25:00.537 "get_zone_info": false, 00:25:00.537 "zone_management": false, 00:25:00.537 "zone_append": false, 00:25:00.537 "compare": false, 00:25:00.537 "compare_and_write": false, 00:25:00.537 "abort": false, 00:25:00.537 "seek_hole": true, 00:25:00.537 "seek_data": true, 00:25:00.537 "copy": false, 00:25:00.537 "nvme_iov_md": false 00:25:00.537 }, 00:25:00.537 "driver_specific": { 00:25:00.537 "lvol": { 00:25:00.537 "lvol_store_uuid": "adae6a3d-7946-4efe-b75e-4c8178582b7c", 00:25:00.537 "base_bdev": "nvme0n1", 00:25:00.537 "thin_provision": true, 00:25:00.537 "num_allocated_clusters": 0, 00:25:00.537 "snapshot": false, 00:25:00.537 "clone": false, 00:25:00.537 "esnap_clone": false 00:25:00.537 } 00:25:00.537 } 00:25:00.537 } 00:25:00.537 ]' 00:25:00.537 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:00.537 12:56:00 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:25:00.537 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:00.794 12:56:00 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:00.794 12:56:00 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:00.794 12:56:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:00.794 12:56:00 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:00.794 12:56:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:00.794 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 00:25:01.050 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:01.050 { 00:25:01.050 "name": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 00:25:01.050 "aliases": [ 00:25:01.050 "lvs/nvme0n1p0" 00:25:01.050 ], 00:25:01.050 "product_name": "Logical Volume", 00:25:01.050 "block_size": 4096, 00:25:01.050 "num_blocks": 26476544, 00:25:01.050 "uuid": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 00:25:01.050 "assigned_rate_limits": { 00:25:01.050 "rw_ios_per_sec": 0, 00:25:01.050 "rw_mbytes_per_sec": 0, 00:25:01.050 "r_mbytes_per_sec": 0, 00:25:01.050 "w_mbytes_per_sec": 0 00:25:01.050 }, 00:25:01.050 "claimed": false, 00:25:01.050 "zoned": false, 00:25:01.050 "supported_io_types": { 00:25:01.050 "read": true, 00:25:01.050 "write": true, 00:25:01.050 "unmap": true, 00:25:01.050 "flush": false, 00:25:01.050 "reset": true, 00:25:01.050 "nvme_admin": false, 00:25:01.050 "nvme_io": false, 00:25:01.050 "nvme_io_md": false, 00:25:01.050 "write_zeroes": true, 00:25:01.050 "zcopy": false, 00:25:01.050 "get_zone_info": false, 00:25:01.050 "zone_management": false, 00:25:01.050 "zone_append": false, 00:25:01.050 "compare": false, 00:25:01.050 "compare_and_write": false, 00:25:01.050 "abort": false, 00:25:01.050 "seek_hole": true, 00:25:01.050 "seek_data": true, 00:25:01.050 "copy": false, 00:25:01.050 "nvme_iov_md": false 00:25:01.050 }, 00:25:01.050 "driver_specific": { 00:25:01.050 "lvol": { 00:25:01.050 "lvol_store_uuid": "adae6a3d-7946-4efe-b75e-4c8178582b7c", 00:25:01.050 "base_bdev": "nvme0n1", 00:25:01.050 "thin_provision": true, 00:25:01.050 "num_allocated_clusters": 0, 00:25:01.050 "snapshot": false, 00:25:01.050 "clone": false, 00:25:01.050 "esnap_clone": false 00:25:01.050 } 00:25:01.050 } 00:25:01.050 } 00:25:01.050 ]' 00:25:01.050 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:01.050 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:01.050 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:01.307 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:25:01.307 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:01.307 12:56:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:01.307 12:56:00 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:01.308 12:56:00 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7a2481c7-45e5-4f8d-a84f-58887b18f7f4 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:01.565 [2024-12-05 12:56:01.169060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.169119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:01.565 [2024-12-05 12:56:01.169132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:01.565 [2024-12-05 12:56:01.169143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.171264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.171301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:01.565 [2024-12-05 12:56:01.171310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:25:01.565 [2024-12-05 12:56:01.171320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.171405] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:01.565 [2024-12-05 12:56:01.171602] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:01.565 [2024-12-05 12:56:01.171614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.171622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:01.565 [2024-12-05 12:56:01.171630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:25:01.565 [2024-12-05 12:56:01.171639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.171714] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:01.565 [2024-12-05 12:56:01.172999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.173118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:01.565 [2024-12-05 12:56:01.173135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:01.565 [2024-12-05 12:56:01.173143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.179944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.180061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:01.565 [2024-12-05 12:56:01.180076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.738 ms 00:25:01.565 [2024-12-05 12:56:01.180083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.180185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.180194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:01.565 [2024-12-05 12:56:01.180203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.054 ms 00:25:01.565 [2024-12-05 12:56:01.180212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.180249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.180256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:01.565 [2024-12-05 12:56:01.180264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:01.565 [2024-12-05 12:56:01.180270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.180296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:01.565 [2024-12-05 12:56:01.181941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.181967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:01.565 [2024-12-05 12:56:01.181976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:25:01.565 [2024-12-05 12:56:01.181984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.182025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.182034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:01.565 [2024-12-05 12:56:01.182041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:01.565 [2024-12-05 12:56:01.182050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.182074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:01.565 [2024-12-05 12:56:01.182194] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:01.565 [2024-12-05 12:56:01.182204] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:01.565 [2024-12-05 12:56:01.182215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:01.565 [2024-12-05 12:56:01.182223] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182234] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182240] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:01.565 [2024-12-05 12:56:01.182248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:01.565 [2024-12-05 12:56:01.182254] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:01.565 [2024-12-05 12:56:01.182261] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:01.565 [2024-12-05 12:56:01.182269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 [2024-12-05 12:56:01.182277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:01.565 [2024-12-05 12:56:01.182283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:25:01.565 [2024-12-05 12:56:01.182291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.182364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.565 
[2024-12-05 12:56:01.182374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:01.565 [2024-12-05 12:56:01.182380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:01.565 [2024-12-05 12:56:01.182387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.565 [2024-12-05 12:56:01.182479] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:01.565 [2024-12-05 12:56:01.182490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:01.565 [2024-12-05 12:56:01.182496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:01.565 [2024-12-05 12:56:01.182517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:01.565 [2024-12-05 12:56:01.182534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:01.565 [2024-12-05 12:56:01.182547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:01.565 [2024-12-05 12:56:01.182555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:01.565 [2024-12-05 12:56:01.182560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:01.565 [2024-12-05 12:56:01.182569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:01.565 [2024-12-05 12:56:01.182574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:01.565 [2024-12-05 12:56:01.182582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:01.565 [2024-12-05 12:56:01.182595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:01.565 [2024-12-05 12:56:01.182614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:01.565 [2024-12-05 12:56:01.182634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:01.565 [2024-12-05 12:56:01.182653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:25:01.565 [2024-12-05 12:56:01.182678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.565 [2024-12-05 12:56:01.182692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:01.565 [2024-12-05 12:56:01.182698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:01.565 [2024-12-05 12:56:01.182711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:01.565 [2024-12-05 12:56:01.182720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:01.565 [2024-12-05 12:56:01.182726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:01.565 [2024-12-05 12:56:01.182734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:01.565 [2024-12-05 12:56:01.182740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:01.565 [2024-12-05 12:56:01.182747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:01.565 [2024-12-05 12:56:01.182760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:01.565 [2024-12-05 12:56:01.182766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.565 [2024-12-05 12:56:01.182774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:01.566 [2024-12-05 12:56:01.182781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:01.566 [2024-12-05 12:56:01.182790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:01.566 [2024-12-05 12:56:01.182796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.566 [2024-12-05 12:56:01.182820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:01.566 [2024-12-05 12:56:01.182827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:01.566 [2024-12-05 12:56:01.182835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:01.566 [2024-12-05 12:56:01.182841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:01.566 [2024-12-05 12:56:01.182849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:01.566 [2024-12-05 12:56:01.182855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:01.566 [2024-12-05 12:56:01.182866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:01.566 [2024-12-05 12:56:01.182884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:01.566 [2024-12-05 12:56:01.182893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:01.566 [2024-12-05 12:56:01.182899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:01.566 [2024-12-05 12:56:01.182907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:25:01.566 [2024-12-05 12:56:01.182914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:01.566 [2024-12-05 12:56:01.182923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:01.566 [2024-12-05 12:56:01.182930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:01.566 [2024-12-05 12:56:01.182941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:01.566 [2024-12-05 12:56:01.182947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:01.566 [2024-12-05 12:56:01.182956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:01.566 [2024-12-05 12:56:01.182962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:01.566 [2024-12-05 12:56:01.182970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:01.566 [2024-12-05 12:56:01.182977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:01.566 [2024-12-05 12:56:01.182985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:01.566 [2024-12-05 12:56:01.182991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:01.566 [2024-12-05 12:56:01.182999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:01.566 [2024-12-05 12:56:01.183008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:01.566 [2024-12-05 12:56:01.183017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:01.566 [2024-12-05 12:56:01.183023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:01.566 [2024-12-05 12:56:01.183031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:01.566 [2024-12-05 12:56:01.183038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:01.566 [2024-12-05 12:56:01.183046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.566 [2024-12-05 12:56:01.183052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:01.566 [2024-12-05 12:56:01.183061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:25:01.566 [2024-12-05 12:56:01.183068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.566 [2024-12-05 12:56:01.183137] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:25:01.566 [2024-12-05 12:56:01.183149] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:03.573 [2024-12-05 12:56:03.350154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.573 [2024-12-05 12:56:03.350259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:03.573 [2024-12-05 12:56:03.350296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2166.990 ms 00:25:03.573 [2024-12-05 12:56:03.350314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.573 [2024-12-05 12:56:03.363301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.573 [2024-12-05 12:56:03.363538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.573 [2024-12-05 12:56:03.363565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.822 ms 00:25:03.573 [2024-12-05 12:56:03.363574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.573 [2024-12-05 12:56:03.363746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.573 [2024-12-05 12:56:03.363758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:03.573 [2024-12-05 12:56:03.363773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:03.573 [2024-12-05 12:56:03.363781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.573 [2024-12-05 12:56:03.385365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.573 [2024-12-05 12:56:03.385423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.573 [2024-12-05 12:56:03.385440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.503 ms 00:25:03.573 [2024-12-05 12:56:03.385448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.573 [2024-12-05 12:56:03.385575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.573 [2024-12-05 12:56:03.385599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:03.573 [2024-12-05 12:56:03.385610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:03.573 [2024-12-05 12:56:03.385618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.573 [2024-12-05 12:56:03.386074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.574 [2024-12-05 12:56:03.386102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:03.574 [2024-12-05 12:56:03.386130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:25:03.574 [2024-12-05 12:56:03.386141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.574 [2024-12-05 12:56:03.386333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.574 [2024-12-05 12:56:03.386347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:03.574 [2024-12-05 12:56:03.386360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:25:03.574 [2024-12-05 12:56:03.386377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.574 [2024-12-05 12:56:03.393687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.574 [2024-12-05 12:56:03.393738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:25:03.574 [2024-12-05 12:56:03.393753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.271 ms 00:25:03.574 [2024-12-05 12:56:03.393763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.574 [2024-12-05 12:56:03.403379] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:03.574 [2024-12-05 12:56:03.420677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.574 [2024-12-05 12:56:03.420731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:03.574 [2024-12-05 12:56:03.420744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.748 ms 00:25:03.574 [2024-12-05 12:56:03.420754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.477768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.477868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:03.832 [2024-12-05 12:56:03.477885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.880 ms 00:25:03.832 [2024-12-05 12:56:03.477899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.478118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.478132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:03.832 [2024-12-05 12:56:03.478141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:25:03.832 [2024-12-05 12:56:03.478164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.481295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.481343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:03.832 [2024-12-05 12:56:03.481355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:25:03.832 [2024-12-05 12:56:03.481367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.484142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.484179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:03.832 [2024-12-05 12:56:03.484190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:25:03.832 [2024-12-05 12:56:03.484200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.484513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.484526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:03.832 [2024-12-05 12:56:03.484536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:25:03.832 [2024-12-05 12:56:03.484549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.512084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.512141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:03.832 [2024-12-05 12:56:03.512154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.502 ms 00:25:03.832 [2024-12-05 12:56:03.512168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
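
Strung out across the xtrace above, trim.sh is assembling its device stack by hand over rpc.py and then booting an FTL instance on top of it. A minimal replay of that sequence, assuming a freshly started spdk_tgt with the same two QEMU NVMe controllers (the adae6a3d… lvstore and 7a2481c7… lvol UUIDs are per-run values, so capture what each RPC prints instead of reusing the ones from this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base namespace: 1310720 blocks x 4096 B = 5120 MiB
    bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')         # 4096, the same probe get_bdev_size runs above
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # prints the new lvstore UUID
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin-provisioned (-t), so 103424 MiB fits on a 5120 MiB store
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # second controller backs the NV write-buffer cache
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # carve one 5171 MiB slice -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The capacity numbers in the startup dump fall straight out of those arguments: the 102400 MiB data_btm region is 26214400 4-KiB blocks; taking away the 10% overprovisioning leaves the 23592960 blocks that ftl0 later reports as num_blocks; and at the dump's 4 bytes per L2P entry that is 90 MiB of mapping table, more than the 60 MiB --l2p_dram_limit, which is why the cached L2P comes up with "l2p maximum resident size is: 59 (of 60) MiB" above.
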
00:25:03.832 [2024-12-05 12:56:03.516250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.516292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:03.832 [2024-12-05 12:56:03.516304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.998 ms 00:25:03.832 [2024-12-05 12:56:03.516325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.519443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.519481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:03.832 [2024-12-05 12:56:03.519491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:25:03.832 [2024-12-05 12:56:03.519501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.832 [2024-12-05 12:56:03.523043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.832 [2024-12-05 12:56:03.523084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:03.832 [2024-12-05 12:56:03.523096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:25:03.832 [2024-12-05 12:56:03.523110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.833 [2024-12-05 12:56:03.523169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.833 [2024-12-05 12:56:03.523183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:03.833 [2024-12-05 12:56:03.523193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:03.833 [2024-12-05 12:56:03.523204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.833 [2024-12-05 12:56:03.523280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.833 [2024-12-05 12:56:03.523292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:03.833 [2024-12-05 12:56:03.523301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:03.833 [2024-12-05 12:56:03.523311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.833 [2024-12-05 12:56:03.524294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:03.833 [2024-12-05 12:56:03.525327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2354.849 ms, result 0 00:25:03.833 [2024-12-05 12:56:03.526193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:03.833 { 00:25:03.833 "name": "ftl0", 00:25:03.833 "uuid": "0bd99ffa-8eea-4244-a9f2-db54184048fa" 00:25:03.833 } 00:25:03.833 12:56:03 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:03.833 12:56:03 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:03.833 12:56:03 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:03.833 12:56:03 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:25:03.833 12:56:03 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:03.833 12:56:03 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:03.833 12:56:03 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:04.091 12:56:03 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:04.348 [ 00:25:04.348 { 00:25:04.348 "name": "ftl0", 00:25:04.348 "aliases": [ 00:25:04.348 "0bd99ffa-8eea-4244-a9f2-db54184048fa" 00:25:04.348 ], 00:25:04.348 "product_name": "FTL disk", 00:25:04.348 "block_size": 4096, 00:25:04.348 "num_blocks": 23592960, 00:25:04.348 "uuid": "0bd99ffa-8eea-4244-a9f2-db54184048fa", 00:25:04.348 "assigned_rate_limits": { 00:25:04.348 "rw_ios_per_sec": 0, 00:25:04.348 "rw_mbytes_per_sec": 0, 00:25:04.348 "r_mbytes_per_sec": 0, 00:25:04.348 "w_mbytes_per_sec": 0 00:25:04.348 }, 00:25:04.348 "claimed": false, 00:25:04.348 "zoned": false, 00:25:04.348 "supported_io_types": { 00:25:04.348 "read": true, 00:25:04.348 "write": true, 00:25:04.348 "unmap": true, 00:25:04.348 "flush": true, 00:25:04.348 "reset": false, 00:25:04.348 "nvme_admin": false, 00:25:04.348 "nvme_io": false, 00:25:04.348 "nvme_io_md": false, 00:25:04.348 "write_zeroes": true, 00:25:04.348 "zcopy": false, 00:25:04.348 "get_zone_info": false, 00:25:04.348 "zone_management": false, 00:25:04.348 "zone_append": false, 00:25:04.348 "compare": false, 00:25:04.348 "compare_and_write": false, 00:25:04.348 "abort": false, 00:25:04.348 "seek_hole": false, 00:25:04.348 "seek_data": false, 00:25:04.348 "copy": false, 00:25:04.348 "nvme_iov_md": false 00:25:04.348 }, 00:25:04.348 "driver_specific": { 00:25:04.348 "ftl": { 00:25:04.348 "base_bdev": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 00:25:04.348 "cache": "nvc0n1p0" 00:25:04.348 } 00:25:04.348 } 00:25:04.348 } 00:25:04.348 ] 00:25:04.348 12:56:03 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:25:04.348 12:56:03 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:04.348 12:56:03 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:04.348 12:56:04 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:04.348 12:56:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:04.606 12:56:04 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:04.606 { 00:25:04.606 "name": "ftl0", 00:25:04.606 "aliases": [ 00:25:04.606 "0bd99ffa-8eea-4244-a9f2-db54184048fa" 00:25:04.606 ], 00:25:04.606 "product_name": "FTL disk", 00:25:04.606 "block_size": 4096, 00:25:04.606 "num_blocks": 23592960, 00:25:04.606 "uuid": "0bd99ffa-8eea-4244-a9f2-db54184048fa", 00:25:04.606 "assigned_rate_limits": { 00:25:04.606 "rw_ios_per_sec": 0, 00:25:04.606 "rw_mbytes_per_sec": 0, 00:25:04.606 "r_mbytes_per_sec": 0, 00:25:04.606 "w_mbytes_per_sec": 0 00:25:04.606 }, 00:25:04.606 "claimed": false, 00:25:04.606 "zoned": false, 00:25:04.606 "supported_io_types": { 00:25:04.606 "read": true, 00:25:04.606 "write": true, 00:25:04.606 "unmap": true, 00:25:04.606 "flush": true, 00:25:04.606 "reset": false, 00:25:04.606 "nvme_admin": false, 00:25:04.606 "nvme_io": false, 00:25:04.606 "nvme_io_md": false, 00:25:04.606 "write_zeroes": true, 00:25:04.606 "zcopy": false, 00:25:04.606 "get_zone_info": false, 00:25:04.606 "zone_management": false, 00:25:04.606 "zone_append": false, 00:25:04.606 "compare": false, 00:25:04.606 "compare_and_write": false, 00:25:04.606 "abort": false, 00:25:04.606 "seek_hole": false, 00:25:04.606 "seek_data": false, 00:25:04.606 "copy": false, 00:25:04.606 "nvme_iov_md": false 00:25:04.606 }, 00:25:04.606 "driver_specific": { 00:25:04.606 "ftl": { 00:25:04.606 "base_bdev": "7a2481c7-45e5-4f8d-a84f-58887b18f7f4", 
00:25:04.606 "cache": "nvc0n1p0" 00:25:04.606 } 00:25:04.606 } 00:25:04.606 } 00:25:04.606 ]' 00:25:04.606 12:56:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:04.606 12:56:04 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:04.606 12:56:04 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:04.866 [2024-12-05 12:56:04.605897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.605964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:04.866 [2024-12-05 12:56:04.605981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:04.866 [2024-12-05 12:56:04.605990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.606026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:04.866 [2024-12-05 12:56:04.606585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.606612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:04.866 [2024-12-05 12:56:04.606622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:25:04.866 [2024-12-05 12:56:04.606632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.607108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.607133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:04.866 [2024-12-05 12:56:04.607143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:25:04.866 [2024-12-05 12:56:04.607168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.610833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.610859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:04.866 [2024-12-05 12:56:04.610870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.642 ms 00:25:04.866 [2024-12-05 12:56:04.610881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.617874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.617907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:04.866 [2024-12-05 12:56:04.617917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.948 ms 00:25:04.866 [2024-12-05 12:56:04.617931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.619430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.619469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:04.866 [2024-12-05 12:56:04.619479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:25:04.866 [2024-12-05 12:56:04.619488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.623220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.623261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:04.866 [2024-12-05 12:56:04.623272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.692 ms 00:25:04.866 [2024-12-05 12:56:04.623284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.623463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.623475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:04.866 [2024-12-05 12:56:04.623484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:25:04.866 [2024-12-05 12:56:04.623495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.625282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.625319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:04.866 [2024-12-05 12:56:04.625330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:25:04.866 [2024-12-05 12:56:04.625343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.626633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.626667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:04.866 [2024-12-05 12:56:04.626677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:25:04.866 [2024-12-05 12:56:04.626686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.627647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.627686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:04.866 [2024-12-05 12:56:04.627696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:25:04.866 [2024-12-05 12:56:04.627705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.628639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.866 [2024-12-05 12:56:04.628672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:04.866 [2024-12-05 12:56:04.628681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:25:04.866 [2024-12-05 12:56:04.628690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.866 [2024-12-05 12:56:04.628727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:04.866 [2024-12-05 12:56:04.628745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628832] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.628991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 
12:56:04.629069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:04.866 [2024-12-05 12:56:04.629124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:25:04.867 [2024-12-05 12:56:04.629294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:04.867 [2024-12-05 12:56:04.629671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:04.867 [2024-12-05 12:56:04.629679] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:04.867 [2024-12-05 12:56:04.629690] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:04.867 [2024-12-05 12:56:04.629696] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:04.867 [2024-12-05 12:56:04.629708] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:04.867 [2024-12-05 12:56:04.629716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:04.867 [2024-12-05 12:56:04.629725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:04.867 [2024-12-05 12:56:04.629733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:04.867 
[2024-12-05 12:56:04.629742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:04.867 [2024-12-05 12:56:04.629748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:04.867 [2024-12-05 12:56:04.629756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:04.867 [2024-12-05 12:56:04.629763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.867 [2024-12-05 12:56:04.629775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:04.867 [2024-12-05 12:56:04.629782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:25:04.867 [2024-12-05 12:56:04.629794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.867 [2024-12-05 12:56:04.631694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.867 [2024-12-05 12:56:04.631840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:04.867 [2024-12-05 12:56:04.631855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:25:04.867 [2024-12-05 12:56:04.631865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.867 [2024-12-05 12:56:04.631984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.867 [2024-12-05 12:56:04.631997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:04.867 [2024-12-05 12:56:04.632006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:04.867 [2024-12-05 12:56:04.632015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.867 [2024-12-05 12:56:04.638525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.867 [2024-12-05 12:56:04.638566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:04.867 [2024-12-05 12:56:04.638580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.867 [2024-12-05 12:56:04.638591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.867 [2024-12-05 12:56:04.638695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.867 [2024-12-05 12:56:04.638723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:04.867 [2024-12-05 12:56:04.638734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.867 [2024-12-05 12:56:04.638746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.867 [2024-12-05 12:56:04.638819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.867 [2024-12-05 12:56:04.638834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:04.867 [2024-12-05 12:56:04.638841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.867 [2024-12-05 12:56:04.638851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.638880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.638890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:04.868 [2024-12-05 12:56:04.638898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.638907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.651181] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.651374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:04.868 [2024-12-05 12:56:04.651391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.651401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.661251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.661433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:04.868 [2024-12-05 12:56:04.661450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.661463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.661552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.661564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:04.868 [2024-12-05 12:56:04.661575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.661600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.661649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.661660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:04.868 [2024-12-05 12:56:04.661668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.661678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.661766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.661779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:04.868 [2024-12-05 12:56:04.661787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.661799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.662028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.662066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:04.868 [2024-12-05 12:56:04.662499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.662529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.662601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.662615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:04.868 [2024-12-05 12:56:04.662625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.662637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 [2024-12-05 12:56:04.662696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:04.868 [2024-12-05 12:56:04.662708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:04.868 [2024-12-05 12:56:04.662716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:04.868 [2024-12-05 12:56:04.662725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.868 
[2024-12-05 12:56:04.662928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.002 ms, result 0 00:25:04.868 true 00:25:04.868 12:56:04 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87723 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87723 ']' 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87723 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87723 00:25:04.868 killing process with pid 87723 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87723' 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87723 00:25:04.868 12:56:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87723 00:25:12.969 12:56:11 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:25:12.969 65536+0 records in 00:25:12.969 65536+0 records out 00:25:12.969 268435456 bytes (268 MB, 256 MiB) copied, 0.832597 s, 322 MB/s 00:25:12.969 12:56:12 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:12.969 [2024-12-05 12:56:12.456128] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:25:12.969 [2024-12-05 12:56:12.456263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87879 ] 00:25:12.969 [2024-12-05 12:56:12.613416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.969 [2024-12-05 12:56:12.638949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.969 [2024-12-05 12:56:12.743310] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:12.969 [2024-12-05 12:56:12.743390] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.228 [2024-12-05 12:56:12.897715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.897785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:13.228 [2024-12-05 12:56:12.897799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:13.228 [2024-12-05 12:56:12.897822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.900243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.900282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.228 [2024-12-05 12:56:12.900297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.401 ms 00:25:13.228 [2024-12-05 12:56:12.900308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.900470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:13.228 [2024-12-05 12:56:12.900731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:13.228 [2024-12-05 12:56:12.900749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.900757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.228 [2024-12-05 12:56:12.900767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:13.228 [2024-12-05 12:56:12.900774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.902283] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:13.228 [2024-12-05 12:56:12.904856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.904887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:13.228 [2024-12-05 12:56:12.904898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:25:13.228 [2024-12-05 12:56:12.904919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.905033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.905044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:13.228 [2024-12-05 12:56:12.905053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:13.228 [2024-12-05 12:56:12.905061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.911557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:13.228 [2024-12-05 12:56:12.911597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.228 [2024-12-05 12:56:12.911609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.450 ms 00:25:13.228 [2024-12-05 12:56:12.911617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.911785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.911803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.228 [2024-12-05 12:56:12.911829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:13.228 [2024-12-05 12:56:12.911839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.911873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.911882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:13.228 [2024-12-05 12:56:12.911890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:13.228 [2024-12-05 12:56:12.911901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.911925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:13.228 [2024-12-05 12:56:12.913611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.913638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.228 [2024-12-05 12:56:12.913647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.693 ms 00:25:13.228 [2024-12-05 12:56:12.913663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.913707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.913718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:13.228 [2024-12-05 12:56:12.913727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:13.228 [2024-12-05 12:56:12.913734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.913753] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:13.228 [2024-12-05 12:56:12.913774] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:13.228 [2024-12-05 12:56:12.913861] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:13.228 [2024-12-05 12:56:12.913881] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:13.228 [2024-12-05 12:56:12.913986] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:13.228 [2024-12-05 12:56:12.914006] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:13.228 [2024-12-05 12:56:12.914017] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:13.228 [2024-12-05 12:56:12.914027] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:13.228 [2024-12-05 12:56:12.914036] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:13.228 [2024-12-05 12:56:12.914048] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:13.228 [2024-12-05 12:56:12.914055] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:13.228 [2024-12-05 12:56:12.914063] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:13.228 [2024-12-05 12:56:12.914072] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:13.228 [2024-12-05 12:56:12.914082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.914090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:13.228 [2024-12-05 12:56:12.914097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:25:13.228 [2024-12-05 12:56:12.914104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.914197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.228 [2024-12-05 12:56:12.914206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:13.228 [2024-12-05 12:56:12.914214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:13.228 [2024-12-05 12:56:12.914221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.228 [2024-12-05 12:56:12.914322] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:13.228 [2024-12-05 12:56:12.914339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:13.228 [2024-12-05 12:56:12.914348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.228 [2024-12-05 12:56:12.914363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.228 [2024-12-05 12:56:12.914372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:13.228 [2024-12-05 12:56:12.914380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:13.228 [2024-12-05 12:56:12.914388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:13.228 [2024-12-05 12:56:12.914395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:13.229 [2024-12-05 12:56:12.914405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.229 [2024-12-05 12:56:12.914421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:13.229 [2024-12-05 12:56:12.914428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:13.229 [2024-12-05 12:56:12.914436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.229 [2024-12-05 12:56:12.914444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:13.229 [2024-12-05 12:56:12.914454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:13.229 [2024-12-05 12:56:12.914462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:13.229 [2024-12-05 12:56:12.914478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:13.229 [2024-12-05 12:56:12.914486] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:13.229 [2024-12-05 12:56:12.914502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.229 [2024-12-05 12:56:12.914517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:13.229 [2024-12-05 12:56:12.914525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.229 [2024-12-05 12:56:12.914544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:13.229 [2024-12-05 12:56:12.914552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.229 [2024-12-05 12:56:12.914566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:13.229 [2024-12-05 12:56:12.914574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.229 [2024-12-05 12:56:12.914589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:13.229 [2024-12-05 12:56:12.914596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.229 [2024-12-05 12:56:12.914612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:13.229 [2024-12-05 12:56:12.914619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:13.229 [2024-12-05 12:56:12.914626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.229 [2024-12-05 12:56:12.914634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:13.229 [2024-12-05 12:56:12.914641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:13.229 [2024-12-05 12:56:12.914649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:13.229 [2024-12-05 12:56:12.914665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:13.229 [2024-12-05 12:56:12.914671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914678] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:13.229 [2024-12-05 12:56:12.914685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:13.229 [2024-12-05 12:56:12.914693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.229 [2024-12-05 12:56:12.914701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.229 [2024-12-05 12:56:12.914708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:13.229 [2024-12-05 12:56:12.914715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:13.229 [2024-12-05 12:56:12.914721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:13.229 
[2024-12-05 12:56:12.914728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:13.229 [2024-12-05 12:56:12.914735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:13.229 [2024-12-05 12:56:12.914742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:13.229 [2024-12-05 12:56:12.914749] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:13.229 [2024-12-05 12:56:12.914759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.229 [2024-12-05 12:56:12.914767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:13.229 [2024-12-05 12:56:12.914777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:13.229 [2024-12-05 12:56:12.914784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:13.229 [2024-12-05 12:56:12.914792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:13.229 [2024-12-05 12:56:12.914799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:13.229 [2024-12-05 12:56:12.914817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:13.229 [2024-12-05 12:56:12.914825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:13.229 [2024-12-05 12:56:12.914832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:13.229 [2024-12-05 12:56:12.914839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:13.229 [2024-12-05 12:56:12.914846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:13.229 [2024-12-05 12:56:12.914853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:13.229 [2024-12-05 12:56:12.914860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:13.229 [2024-12-05 12:56:12.914868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:13.229 [2024-12-05 12:56:12.914875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:13.229 [2024-12-05 12:56:12.914882] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:13.229 [2024-12-05 12:56:12.914893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.229 [2024-12-05 12:56:12.914900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:13.229 [2024-12-05 12:56:12.914910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:13.229 [2024-12-05 12:56:12.914918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:13.229 [2024-12-05 12:56:12.914925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:13.229 [2024-12-05 12:56:12.914932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.914940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:13.229 [2024-12-05 12:56:12.914951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:25:13.229 [2024-12-05 12:56:12.914958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.926488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.926532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.229 [2024-12-05 12:56:12.926546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.477 ms 00:25:13.229 [2024-12-05 12:56:12.926554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.926723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.926738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:13.229 [2024-12-05 12:56:12.926747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:13.229 [2024-12-05 12:56:12.926755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.945594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.945667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.229 [2024-12-05 12:56:12.945686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.811 ms 00:25:13.229 [2024-12-05 12:56:12.945698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.945875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.945894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.229 [2024-12-05 12:56:12.945907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:13.229 [2024-12-05 12:56:12.945918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.946383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.946414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.229 [2024-12-05 12:56:12.946429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:25:13.229 [2024-12-05 12:56:12.946442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.946640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.946667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.229 [2024-12-05 12:56:12.946680] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:25:13.229 [2024-12-05 12:56:12.946695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.954442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.229 [2024-12-05 12:56:12.954484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.229 [2024-12-05 12:56:12.954495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.716 ms 00:25:13.229 [2024-12-05 12:56:12.954507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.229 [2024-12-05 12:56:12.957295] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:13.229 [2024-12-05 12:56:12.957336] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:13.230 [2024-12-05 12:56:12.957350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:12.957358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:13.230 [2024-12-05 12:56:12.957368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:25:13.230 [2024-12-05 12:56:12.957375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:12.972057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:12.972127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:13.230 [2024-12-05 12:56:12.972158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.554 ms 00:25:13.230 [2024-12-05 12:56:12.972167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:12.974845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:12.974884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:13.230 [2024-12-05 12:56:12.974895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:25:13.230 [2024-12-05 12:56:12.974903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:12.976389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:12.976428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:13.230 [2024-12-05 12:56:12.976438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:25:13.230 [2024-12-05 12:56:12.976445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:12.976823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:12.976844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:13.230 [2024-12-05 12:56:12.976854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:13.230 [2024-12-05 12:56:12.976862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:12.994658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:12.994736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:13.230 [2024-12-05 12:56:12.994751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.771 ms 00:25:13.230 [2024-12-05 12:56:12.994759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:13.003196] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:13.230 [2024-12-05 12:56:13.020735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:13.020820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:13.230 [2024-12-05 12:56:13.020846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.825 ms 00:25:13.230 [2024-12-05 12:56:13.020855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:13.020999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:13.021013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:13.230 [2024-12-05 12:56:13.021022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:13.230 [2024-12-05 12:56:13.021037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:13.021095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:13.021104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:13.230 [2024-12-05 12:56:13.021113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:13.230 [2024-12-05 12:56:13.021120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:13.021147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:13.021159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:13.230 [2024-12-05 12:56:13.021168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:13.230 [2024-12-05 12:56:13.021175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:13.021214] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:13.230 [2024-12-05 12:56:13.021225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:13.021243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:13.230 [2024-12-05 12:56:13.021251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:13.230 [2024-12-05 12:56:13.021259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:13.025062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:13.025109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:13.230 [2024-12-05 12:56:13.025123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.777 ms 00:25:13.230 [2024-12-05 12:56:13.025133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 [2024-12-05 12:56:13.025249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.230 [2024-12-05 12:56:13.025260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:13.230 [2024-12-05 12:56:13.025276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:13.230 [2024-12-05 12:56:13.025285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.230 
[2024-12-05 12:56:13.026201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:13.230 [2024-12-05 12:56:13.027216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.186 ms, result 0 00:25:13.230 [2024-12-05 12:56:13.027714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:13.230 [2024-12-05 12:56:13.037652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:14.603  [2024-12-05T12:56:15.387Z] Copying: 41/256 [MB] (41 MBps) [2024-12-05T12:56:16.319Z] Copying: 79/256 [MB] (38 MBps) [2024-12-05T12:56:17.250Z] Copying: 119/256 [MB] (40 MBps) [2024-12-05T12:56:18.184Z] Copying: 157/256 [MB] (38 MBps) [2024-12-05T12:56:19.117Z] Copying: 195/256 [MB] (38 MBps) [2024-12-05T12:56:19.684Z] Copying: 236/256 [MB] (40 MBps) [2024-12-05T12:56:19.684Z] Copying: 256/256 [MB] (average 39 MBps)[2024-12-05 12:56:19.521367] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:19.832 [2024-12-05 12:56:19.522777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.522838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:19.832 [2024-12-05 12:56:19.522853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:19.832 [2024-12-05 12:56:19.522863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.522886] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:19.832 [2024-12-05 12:56:19.523431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.523453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:19.832 [2024-12-05 12:56:19.523463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:25:19.832 [2024-12-05 12:56:19.523472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.524839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.524869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:19.832 [2024-12-05 12:56:19.524879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:25:19.832 [2024-12-05 12:56:19.524897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.531205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.531250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:19.832 [2024-12-05 12:56:19.531261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.289 ms 00:25:19.832 [2024-12-05 12:56:19.531269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.538308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.538344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:19.832 [2024-12-05 12:56:19.538367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.978 ms 00:25:19.832 [2024-12-05 12:56:19.538379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.540182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.540217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:19.832 [2024-12-05 12:56:19.540226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:25:19.832 [2024-12-05 12:56:19.540235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.544198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.544244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:19.832 [2024-12-05 12:56:19.544255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:25:19.832 [2024-12-05 12:56:19.544263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.544392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.544403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:19.832 [2024-12-05 12:56:19.544412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:19.832 [2024-12-05 12:56:19.544423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.546231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.546261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:19.832 [2024-12-05 12:56:19.546270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:25:19.832 [2024-12-05 12:56:19.546278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.547393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.547420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:19.832 [2024-12-05 12:56:19.547429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:25:19.832 [2024-12-05 12:56:19.547436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.832 [2024-12-05 12:56:19.548526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.832 [2024-12-05 12:56:19.548555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:19.832 [2024-12-05 12:56:19.548564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:25:19.832 [2024-12-05 12:56:19.548572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.833 [2024-12-05 12:56:19.549624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.833 [2024-12-05 12:56:19.549654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:19.833 [2024-12-05 12:56:19.549665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:25:19.833 [2024-12-05 12:56:19.549672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.833 [2024-12-05 12:56:19.549701] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:19.833 [2024-12-05 12:56:19.549716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:19.833 [2024-12-05 12:56:19.549726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:19.833 [... ftl_dev_dump_bands repeats identically for Bands 3 through 99: 0 / 261120 wr_cnt: 0 state: free ...] [2024-12-05 12:56:19.550520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:19.834 [2024-12-05 12:56:19.550536] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:19.834 [2024-12-05 12:56:19.550544]
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:19.834 [2024-12-05 12:56:19.550553] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:19.834 [2024-12-05 12:56:19.550560] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:19.834 [2024-12-05 12:56:19.550567] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:19.834 [2024-12-05 12:56:19.550576] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:19.834 [2024-12-05 12:56:19.550583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:19.834 [2024-12-05 12:56:19.550591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:19.834 [2024-12-05 12:56:19.550609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:19.834 [2024-12-05 12:56:19.550615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:19.834 [2024-12-05 12:56:19.550622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:19.834 [2024-12-05 12:56:19.550628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.834 [2024-12-05 12:56:19.550636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:19.834 [2024-12-05 12:56:19.550644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:25:19.834 [2024-12-05 12:56:19.550651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.552537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.834 [2024-12-05 12:56:19.552556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:19.834 [2024-12-05 12:56:19.552566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:25:19.834 [2024-12-05 12:56:19.552575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.552702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.834 [2024-12-05 12:56:19.552711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:19.834 [2024-12-05 12:56:19.552720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:19.834 [2024-12-05 12:56:19.552731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.559036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.559090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:19.834 [2024-12-05 12:56:19.559103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.559118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.559197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.559212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:19.834 [2024-12-05 12:56:19.559220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.559228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.559275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.559284] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:19.834 [2024-12-05 12:56:19.559293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.559300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.559321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.559329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:19.834 [2024-12-05 12:56:19.559337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.559345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.571436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.571503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:19.834 [2024-12-05 12:56:19.571516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.571524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.581064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:19.834 [2024-12-05 12:56:19.581077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.581085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.581140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:19.834 [2024-12-05 12:56:19.581149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.581157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.581204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:19.834 [2024-12-05 12:56:19.581212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.581220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.581315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:19.834 [2024-12-05 12:56:19.581323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.581331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.581370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:19.834 [2024-12-05 12:56:19.581383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.581390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.581439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:19.834 [2024-12-05 12:56:19.581448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.581455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:19.834 [2024-12-05 12:56:19.581524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:19.834 [2024-12-05 12:56:19.581532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:19.834 [2024-12-05 12:56:19.581540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.834 [2024-12-05 12:56:19.581704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.902 ms, result 0 00:25:20.401 00:25:20.401 00:25:20.401 12:56:20 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87959 00:25:20.401 12:56:20 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:20.401 12:56:20 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87959 00:25:20.401 12:56:20 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87959 ']' 00:25:20.401 12:56:20 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.401 12:56:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.401 12:56:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.401 12:56:20 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.401 12:56:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:20.401 [2024-12-05 12:56:20.242068] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
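
At this point the harness launches a fresh spdk_tgt and blocks in waitforlisten until the RPC socket answers. A minimal stand-alone sketch of that launch-and-wait pattern follows; the binary and socket paths mirror the log, but the polling loop and its roughly 10-second budget are illustrative choices, not the actual waitforlisten implementation from autotest_common.sh.

    #!/usr/bin/env bash
    # Start spdk_tgt and wait until its JSON-RPC socket responds.
    # Simplified stand-in for waitforlisten; spdk_get_version is a
    # cheap real RPC used here purely as a liveness probe.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    for _ in $(seq 1 100); do                       # ~10 s at 100 ms per try
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version \
            >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "spdk_tgt (pid $svcpid) listening on /var/tmp/spdk.sock"
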
00:25:20.401 [2024-12-05 12:56:20.242244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87959 ] 00:25:20.659 [2024-12-05 12:56:20.403593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.659 [2024-12-05 12:56:20.428660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.595 12:56:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.595 12:56:21 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:21.595 12:56:21 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:21.595 [2024-12-05 12:56:21.332536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:21.595 [2024-12-05 12:56:21.332637] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:21.857 [2024-12-05 12:56:21.506942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.507013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:21.857 [2024-12-05 12:56:21.507028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:21.857 [2024-12-05 12:56:21.507040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.509584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.509636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:21.857 [2024-12-05 12:56:21.509648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:25:21.857 [2024-12-05 12:56:21.509658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.509757] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:21.857 [2024-12-05 12:56:21.510451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:21.857 [2024-12-05 12:56:21.510510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.510526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:21.857 [2024-12-05 12:56:21.510537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:25:21.857 [2024-12-05 12:56:21.510552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.512060] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:21.857 [2024-12-05 12:56:21.515172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.515216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:21.857 [2024-12-05 12:56:21.515231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:25:21.857 [2024-12-05 12:56:21.515241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.515338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.515350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:21.857 [2024-12-05 12:56:21.515365] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:21.857 [2024-12-05 12:56:21.515374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.522073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.522124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:21.857 [2024-12-05 12:56:21.522137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.639 ms 00:25:21.857 [2024-12-05 12:56:21.522145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.522286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.522297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:21.857 [2024-12-05 12:56:21.522309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:21.857 [2024-12-05 12:56:21.522319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.522356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.522368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:21.857 [2024-12-05 12:56:21.522378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:21.857 [2024-12-05 12:56:21.522385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.522414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:21.857 [2024-12-05 12:56:21.524245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.524281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:21.857 [2024-12-05 12:56:21.524293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:25:21.857 [2024-12-05 12:56:21.524302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.524345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.524355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:21.857 [2024-12-05 12:56:21.524364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:21.857 [2024-12-05 12:56:21.524377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.524399] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:21.857 [2024-12-05 12:56:21.524423] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:21.857 [2024-12-05 12:56:21.524468] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:21.857 [2024-12-05 12:56:21.524493] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:21.857 [2024-12-05 12:56:21.524599] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:21.857 [2024-12-05 12:56:21.524611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:21.857 [2024-12-05 12:56:21.524622] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:21.857 [2024-12-05 12:56:21.524634] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:21.857 [2024-12-05 12:56:21.524643] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:21.857 [2024-12-05 12:56:21.524655] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:21.857 [2024-12-05 12:56:21.524662] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:21.857 [2024-12-05 12:56:21.524673] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:21.857 [2024-12-05 12:56:21.524686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:21.857 [2024-12-05 12:56:21.524705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.524717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:21.857 [2024-12-05 12:56:21.524727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:25:21.857 [2024-12-05 12:56:21.524735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.524846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.857 [2024-12-05 12:56:21.524857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:21.857 [2024-12-05 12:56:21.524867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:21.857 [2024-12-05 12:56:21.524878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.857 [2024-12-05 12:56:21.524991] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:21.857 [2024-12-05 12:56:21.525007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:21.857 [2024-12-05 12:56:21.525017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:21.857 [2024-12-05 12:56:21.525025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:21.857 [2024-12-05 12:56:21.525050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:21.857 [2024-12-05 12:56:21.525066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:21.857 [2024-12-05 12:56:21.525074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:21.857 [2024-12-05 12:56:21.525089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:21.857 [2024-12-05 12:56:21.525096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:21.857 [2024-12-05 12:56:21.525105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:21.857 [2024-12-05 12:56:21.525111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:21.857 [2024-12-05 12:56:21.525121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:21.857 [2024-12-05 12:56:21.525132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.857 
[2024-12-05 12:56:21.525145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:21.857 [2024-12-05 12:56:21.525156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:21.857 [2024-12-05 12:56:21.525164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:21.857 [2024-12-05 12:56:21.525181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.857 [2024-12-05 12:56:21.525203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:21.857 [2024-12-05 12:56:21.525211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.857 [2024-12-05 12:56:21.525226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:21.857 [2024-12-05 12:56:21.525234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.857 [2024-12-05 12:56:21.525271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:21.857 [2024-12-05 12:56:21.525278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:21.857 [2024-12-05 12:56:21.525286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.858 [2024-12-05 12:56:21.525293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:21.858 [2024-12-05 12:56:21.525302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:21.858 [2024-12-05 12:56:21.525309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:21.858 [2024-12-05 12:56:21.525317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:21.858 [2024-12-05 12:56:21.525323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:21.858 [2024-12-05 12:56:21.525334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:21.858 [2024-12-05 12:56:21.525341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:21.858 [2024-12-05 12:56:21.525349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:21.858 [2024-12-05 12:56:21.525356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.858 [2024-12-05 12:56:21.525364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:21.858 [2024-12-05 12:56:21.525371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:21.858 [2024-12-05 12:56:21.525379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.858 [2024-12-05 12:56:21.525385] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:21.858 [2024-12-05 12:56:21.525395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:21.858 [2024-12-05 12:56:21.525406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:21.858 [2024-12-05 12:56:21.525419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.858 [2024-12-05 12:56:21.525427] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:21.858 [2024-12-05 12:56:21.525435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:21.858 [2024-12-05 12:56:21.525442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:21.858 [2024-12-05 12:56:21.525451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:21.858 [2024-12-05 12:56:21.525462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:21.858 [2024-12-05 12:56:21.525477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:21.858 [2024-12-05 12:56:21.525491] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:21.858 [2024-12-05 12:56:21.525510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.858 [2024-12-05 12:56:21.525524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:21.858 [2024-12-05 12:56:21.525533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:21.858 [2024-12-05 12:56:21.525540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:21.858 [2024-12-05 12:56:21.525549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:21.858 [2024-12-05 12:56:21.525556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:21.858 [2024-12-05 12:56:21.525565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:21.858 [2024-12-05 12:56:21.525573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:21.858 [2024-12-05 12:56:21.525581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:21.858 [2024-12-05 12:56:21.525589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:21.858 [2024-12-05 12:56:21.525598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:21.858 [2024-12-05 12:56:21.525605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:21.858 [2024-12-05 12:56:21.525614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:21.858 [2024-12-05 12:56:21.525621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:21.858 [2024-12-05 12:56:21.525632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:21.858 [2024-12-05 12:56:21.525639] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:21.858 [2024-12-05 
12:56:21.525650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.858 [2024-12-05 12:56:21.525661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:21.858 [2024-12-05 12:56:21.525674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:21.858 [2024-12-05 12:56:21.525687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:21.858 [2024-12-05 12:56:21.525701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:21.858 [2024-12-05 12:56:21.525715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.525729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:21.858 [2024-12-05 12:56:21.525743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:25:21.858 [2024-12-05 12:56:21.525759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.537681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.537733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.858 [2024-12-05 12:56:21.537752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.838 ms 00:25:21.858 [2024-12-05 12:56:21.537764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.537959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.537979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:21.858 [2024-12-05 12:56:21.537988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:21.858 [2024-12-05 12:56:21.538001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.548885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.548934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.858 [2024-12-05 12:56:21.548947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.861 ms 00:25:21.858 [2024-12-05 12:56:21.548959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.549060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.549073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.858 [2024-12-05 12:56:21.549086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:21.858 [2024-12-05 12:56:21.549096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.549549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.549586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.858 [2024-12-05 12:56:21.549596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:25:21.858 [2024-12-05 12:56:21.549606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.549752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.549775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.858 [2024-12-05 12:56:21.549788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:25:21.858 [2024-12-05 12:56:21.549821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.556734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.556783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.858 [2024-12-05 12:56:21.556795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:25:21.858 [2024-12-05 12:56:21.556816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.572348] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:21.858 [2024-12-05 12:56:21.572421] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:21.858 [2024-12-05 12:56:21.572437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.572448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:21.858 [2024-12-05 12:56:21.572462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.497 ms 00:25:21.858 [2024-12-05 12:56:21.572472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.591876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.591967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:21.858 [2024-12-05 12:56:21.591982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.338 ms 00:25:21.858 [2024-12-05 12:56:21.591995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.595168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.595223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:21.858 [2024-12-05 12:56:21.595235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.023 ms 00:25:21.858 [2024-12-05 12:56:21.595244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 Some configs were skipped because the RPC state that can call them passed over. 
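
The "Some configs were skipped" notice is load_config reporting that startup-only RPCs in the saved JSON no longer apply to the already-running target. The round-trip pattern driving this restart looks roughly like the sketch below; ftl.json is a hypothetical filename, while save_config and load_config are the standard rpc.py methods.

    #!/usr/bin/env bash
    # Persist the live bdev/FTL configuration and replay it after a
    # target restart. Sketch of the pattern used above; ftl.json is
    # a placeholder name.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > ftl.json     # dump current state as JSON
    # ... stop the target, start a fresh spdk_tgt, wait for the socket ...
    "$rpc" load_config < ftl.json     # replay it; RPCs whose startup
                                      # window has passed are skipped
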
00:25:21.858 [2024-12-05 12:56:21.597277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.597316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:21.858 [2024-12-05 12:56:21.597327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:25:21.858 [2024-12-05 12:56:21.597336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.597686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.597710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:21.858 [2024-12-05 12:56:21.597720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:25:21.858 [2024-12-05 12:56:21.597732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.858 [2024-12-05 12:56:21.617364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.858 [2024-12-05 12:56:21.617451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:21.858 [2024-12-05 12:56:21.617466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.606 ms 00:25:21.859 [2024-12-05 12:56:21.617480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.625950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:21.859 [2024-12-05 12:56:21.643916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.859 [2024-12-05 12:56:21.643981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:21.859 [2024-12-05 12:56:21.643999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.295 ms 00:25:21.859 [2024-12-05 12:56:21.644008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.644135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.859 [2024-12-05 12:56:21.644149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:21.859 [2024-12-05 12:56:21.644161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:21.859 [2024-12-05 12:56:21.644169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.644229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.859 [2024-12-05 12:56:21.644238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:21.859 [2024-12-05 12:56:21.644248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:21.859 [2024-12-05 12:56:21.644261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.644286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.859 [2024-12-05 12:56:21.644294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:21.859 [2024-12-05 12:56:21.644312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:21.859 [2024-12-05 12:56:21.644320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.644356] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:21.859 [2024-12-05 12:56:21.644366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.859 
[2024-12-05 12:56:21.644375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:21.859 [2024-12-05 12:56:21.644383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:21.859 [2024-12-05 12:56:21.644392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.648400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.859 [2024-12-05 12:56:21.648455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:21.859 [2024-12-05 12:56:21.648467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.986 ms 00:25:21.859 [2024-12-05 12:56:21.648482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.648588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.859 [2024-12-05 12:56:21.648602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:21.859 [2024-12-05 12:56:21.648613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:21.859 [2024-12-05 12:56:21.648624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.859 [2024-12-05 12:56:21.649582] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:21.859 [2024-12-05 12:56:21.650624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.340 ms, result 0 00:25:21.859 [2024-12-05 12:56:21.651460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:21.859 12:56:21 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:22.121 [2024-12-05 12:56:21.878336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.121 [2024-12-05 12:56:21.878415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:22.121 [2024-12-05 12:56:21.878437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:25:22.121 [2024-12-05 12:56:21.878446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.121 [2024-12-05 12:56:21.878485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.478 ms, result 0 00:25:22.121 true 00:25:22.121 12:56:21 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:22.380 [2024-12-05 12:56:22.094331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.380 [2024-12-05 12:56:22.094400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:22.380 [2024-12-05 12:56:22.094413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:25:22.380 [2024-12-05 12:56:22.094423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.380 [2024-12-05 12:56:22.094462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.200 ms, result 0 00:25:22.380 true 00:25:22.380 12:56:22 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87959 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87959 ']' 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87959 
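The two bdev_ftl_unmap calls above (trim.sh@78 and trim.sh@79) trim 1024 blocks at the head and at the tail of the device: the startup dump below reports "L2P entries: 23592960", and 23592960 - 1024 = 23591936, so the second call covers exactly the last 1024 blocks. Each unmap runs as its own short-lived 'FTL trim' management process. A condensed sketch of the same pattern against a running SPDK application, with TOTAL_BLOCKS assumed from that dump:

#!/usr/bin/env bash
# Trim the first and last 1024-block stripes of an FTL bdev over
# JSON-RPC, as exercised by trim.sh above. TOTAL_BLOCKS is an
# assumption taken from the startup dump ("L2P entries: 23592960").
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
TOTAL_BLOCKS=23592960
CHUNK=1024
"$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$CHUNK"
"$RPC" bdev_ftl_unmap -b ftl0 --lba $((TOTAL_BLOCKS - CHUNK)) --num_blocks "$CHUNK"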
00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87959 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87959' 00:25:22.380 killing process with pid 87959 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87959 00:25:22.380 12:56:22 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87959 00:25:22.645 [2024-12-05 12:56:22.266308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.266381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:22.645 [2024-12-05 12:56:22.266397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:22.645 [2024-12-05 12:56:22.266411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.266439] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:22.645 [2024-12-05 12:56:22.267033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.267073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:22.645 [2024-12-05 12:56:22.267083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:25:22.645 [2024-12-05 12:56:22.267093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.267413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.267440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:22.645 [2024-12-05 12:56:22.267450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:25:22.645 [2024-12-05 12:56:22.267459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.271462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.271506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:22.645 [2024-12-05 12:56:22.271517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:25:22.645 [2024-12-05 12:56:22.271529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.278608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.278668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:22.645 [2024-12-05 12:56:22.278681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.043 ms 00:25:22.645 [2024-12-05 12:56:22.278695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.280448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.280510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:22.645 [2024-12-05 12:56:22.280521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:25:22.645 
[2024-12-05 12:56:22.280531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.284016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.284064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:22.645 [2024-12-05 12:56:22.284077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:25:22.645 [2024-12-05 12:56:22.284088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.284227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.284247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:22.645 [2024-12-05 12:56:22.284262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:22.645 [2024-12-05 12:56:22.284272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.645 [2024-12-05 12:56:22.286132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.645 [2024-12-05 12:56:22.286175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:22.645 [2024-12-05 12:56:22.286184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:25:22.646 [2024-12-05 12:56:22.286198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.646 [2024-12-05 12:56:22.287353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.646 [2024-12-05 12:56:22.287389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:22.646 [2024-12-05 12:56:22.287400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:25:22.646 [2024-12-05 12:56:22.287410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.646 [2024-12-05 12:56:22.288408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.646 [2024-12-05 12:56:22.288444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:22.646 [2024-12-05 12:56:22.288453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:25:22.646 [2024-12-05 12:56:22.288462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.646 [2024-12-05 12:56:22.289485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.646 [2024-12-05 12:56:22.289525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:22.646 [2024-12-05 12:56:22.289535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:25:22.646 [2024-12-05 12:56:22.289546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.646 [2024-12-05 12:56:22.289579] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:22.646 [2024-12-05 12:56:22.289599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289666] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 
12:56:22.289922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.289996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:25:22.646 [2024-12-05 12:56:22.290174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:22.646 [2024-12-05 12:56:22.290406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:22.647 [2024-12-05 12:56:22.290648] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:22.647 [2024-12-05 12:56:22.290657] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:22.647 [2024-12-05 12:56:22.290670] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:22.647 [2024-12-05 12:56:22.290678] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:22.647 [2024-12-05 
12:56:22.290687] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:22.647 [2024-12-05 12:56:22.290695] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:22.647 [2024-12-05 12:56:22.290703] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:22.647 [2024-12-05 12:56:22.290714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:22.647 [2024-12-05 12:56:22.290723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:22.647 [2024-12-05 12:56:22.290729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:22.647 [2024-12-05 12:56:22.290743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:22.647 [2024-12-05 12:56:22.290755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.647 [2024-12-05 12:56:22.290770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:22.647 [2024-12-05 12:56:22.290782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:25:22.647 [2024-12-05 12:56:22.290792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.292710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.647 [2024-12-05 12:56:22.292744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:22.647 [2024-12-05 12:56:22.292756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:25:22.647 [2024-12-05 12:56:22.292767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.292941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.647 [2024-12-05 12:56:22.292967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:22.647 [2024-12-05 12:56:22.292977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:22.647 [2024-12-05 12:56:22.292988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.299695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.299757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:22.647 [2024-12-05 12:56:22.299769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.299788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.299940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.299963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:22.647 [2024-12-05 12:56:22.299978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.299995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.300054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.300083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:22.647 [2024-12-05 12:56:22.300101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.300110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.300130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.300141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:22.647 [2024-12-05 12:56:22.300148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.300157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.312968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.313048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:22.647 [2024-12-05 12:56:22.313061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.313071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.322505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.322586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:22.647 [2024-12-05 12:56:22.322600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.322613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.322690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.322702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:22.647 [2024-12-05 12:56:22.322712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.322725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.322769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.322785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:22.647 [2024-12-05 12:56:22.322793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.322803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.323012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.323029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:22.647 [2024-12-05 12:56:22.323037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.323048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.323094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.323131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:22.647 [2024-12-05 12:56:22.323139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.323154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.323234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.323256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:22.647 [2024-12-05 12:56:22.323265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.323275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 
[2024-12-05 12:56:22.323337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.647 [2024-12-05 12:56:22.323358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:22.647 [2024-12-05 12:56:22.323372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.647 [2024-12-05 12:56:22.323383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.647 [2024-12-05 12:56:22.323548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.208 ms, result 0 00:25:22.905 12:56:22 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:22.905 12:56:22 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:23.164 [2024-12-05 12:56:22.777233] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:25:23.164 [2024-12-05 12:56:22.777396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88001 ] 00:25:23.164 [2024-12-05 12:56:22.933428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.164 [2024-12-05 12:56:22.959194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.423 [2024-12-05 12:56:23.064657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.423 [2024-12-05 12:56:23.064744] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.423 [2024-12-05 12:56:23.219679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.219756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:23.423 [2024-12-05 12:56:23.219770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:23.423 [2024-12-05 12:56:23.219779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.222304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.222360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.423 [2024-12-05 12:56:23.222373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:25:23.423 [2024-12-05 12:56:23.222382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.222594] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:23.423 [2024-12-05 12:56:23.222896] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:23.423 [2024-12-05 12:56:23.222928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.222937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.423 [2024-12-05 12:56:23.222950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:25:23.423 [2024-12-05 12:56:23.222958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.224555] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:23.423 [2024-12-05 12:56:23.227184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.227226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:23.423 [2024-12-05 12:56:23.227238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:25:23.423 [2024-12-05 12:56:23.227250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.227344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.227366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:23.423 [2024-12-05 12:56:23.227376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:23.423 [2024-12-05 12:56:23.227388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.234011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.234070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.423 [2024-12-05 12:56:23.234082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.560 ms 00:25:23.423 [2024-12-05 12:56:23.234090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.234285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.234305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.423 [2024-12-05 12:56:23.234316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:23.423 [2024-12-05 12:56:23.234328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.234366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.234381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:23.423 [2024-12-05 12:56:23.234396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:23.423 [2024-12-05 12:56:23.234404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.234430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:23.423 [2024-12-05 12:56:23.236154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.236195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.423 [2024-12-05 12:56:23.236205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:25:23.423 [2024-12-05 12:56:23.236218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.236268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.236280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:23.423 [2024-12-05 12:56:23.236293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:23.423 [2024-12-05 12:56:23.236304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.236332] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:23.423 [2024-12-05 12:56:23.236361] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:23.423 [2024-12-05 12:56:23.236413] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:23.423 [2024-12-05 12:56:23.236447] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:23.423 [2024-12-05 12:56:23.236588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:23.423 [2024-12-05 12:56:23.236608] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:23.423 [2024-12-05 12:56:23.236625] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:23.423 [2024-12-05 12:56:23.236643] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:23.423 [2024-12-05 12:56:23.236658] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:23.423 [2024-12-05 12:56:23.236666] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:23.423 [2024-12-05 12:56:23.236674] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:23.423 [2024-12-05 12:56:23.236682] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:23.423 [2024-12-05 12:56:23.236692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:23.423 [2024-12-05 12:56:23.236706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.236715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:23.423 [2024-12-05 12:56:23.236727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:25:23.423 [2024-12-05 12:56:23.236734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.236853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.423 [2024-12-05 12:56:23.236868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:23.423 [2024-12-05 12:56:23.236881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:25:23.423 [2024-12-05 12:56:23.236894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.423 [2024-12-05 12:56:23.237012] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:23.423 [2024-12-05 12:56:23.237039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:23.423 [2024-12-05 12:56:23.237048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:23.424 [2024-12-05 12:56:23.237072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:23.424 [2024-12-05 12:56:23.237096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 
00:25:23.424 [2024-12-05 12:56:23.237102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.424 [2024-12-05 12:56:23.237109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:23.424 [2024-12-05 12:56:23.237115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:23.424 [2024-12-05 12:56:23.237122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.424 [2024-12-05 12:56:23.237129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:23.424 [2024-12-05 12:56:23.237140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:23.424 [2024-12-05 12:56:23.237152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:23.424 [2024-12-05 12:56:23.237168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:23.424 [2024-12-05 12:56:23.237189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:23.424 [2024-12-05 12:56:23.237222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:23.424 [2024-12-05 12:56:23.237264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:23.424 [2024-12-05 12:56:23.237285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:23.424 [2024-12-05 12:56:23.237306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.424 [2024-12-05 12:56:23.237319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:23.424 [2024-12-05 12:56:23.237329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:23.424 [2024-12-05 12:56:23.237341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.424 [2024-12-05 12:56:23.237352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:23.424 [2024-12-05 12:56:23.237364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:23.424 [2024-12-05 12:56:23.237377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237387] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:23.424 [2024-12-05 12:56:23.237393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:23.424 [2024-12-05 12:56:23.237400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:23.424 [2024-12-05 12:56:23.237418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:23.424 [2024-12-05 12:56:23.237425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.424 [2024-12-05 12:56:23.237445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:23.424 [2024-12-05 12:56:23.237457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:23.424 [2024-12-05 12:56:23.237469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:23.424 [2024-12-05 12:56:23.237482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:23.424 [2024-12-05 12:56:23.237493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:23.424 [2024-12-05 12:56:23.237505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:23.424 [2024-12-05 12:56:23.237518] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:23.424 [2024-12-05 12:56:23.237530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.424 [2024-12-05 12:56:23.237544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:23.424 [2024-12-05 12:56:23.237562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:23.424 [2024-12-05 12:56:23.237570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:23.424 [2024-12-05 12:56:23.237578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:23.424 [2024-12-05 12:56:23.237586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:23.424 [2024-12-05 12:56:23.237593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:23.424 [2024-12-05 12:56:23.237600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:23.424 [2024-12-05 12:56:23.237608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:23.424 [2024-12-05 12:56:23.237615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:23.424 [2024-12-05 12:56:23.237622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:23.424 [2024-12-05 12:56:23.237631] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:23.424 [2024-12-05 12:56:23.237640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:23.424 [2024-12-05 12:56:23.237653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:23.424 [2024-12-05 12:56:23.237667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:23.424 [2024-12-05 12:56:23.237680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:23.424 [2024-12-05 12:56:23.237701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.424 [2024-12-05 12:56:23.237710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.424 [2024-12-05 12:56:23.237720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:23.424 [2024-12-05 12:56:23.237727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:23.424 [2024-12-05 12:56:23.237735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:23.424 [2024-12-05 12:56:23.237744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.424 [2024-12-05 12:56:23.237753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:23.424 [2024-12-05 12:56:23.237765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:25:23.424 [2024-12-05 12:56:23.237774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.424 [2024-12-05 12:56:23.249513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.424 [2024-12-05 12:56:23.249575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.424 [2024-12-05 12:56:23.249590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.651 ms 00:25:23.424 [2024-12-05 12:56:23.249599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.424 [2024-12-05 12:56:23.249801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.424 [2024-12-05 12:56:23.249862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:23.424 [2024-12-05 12:56:23.249872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:23.424 [2024-12-05 12:56:23.249879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.682 [2024-12-05 12:56:23.275668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.682 [2024-12-05 12:56:23.275775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.682 [2024-12-05 12:56:23.275836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.744 ms 00:25:23.682 [2024-12-05 12:56:23.275856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.682 [2024-12-05 12:56:23.276127] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.682 [2024-12-05 12:56:23.276180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.682 [2024-12-05 12:56:23.276218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:23.682 [2024-12-05 12:56:23.276241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.682 [2024-12-05 12:56:23.276870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.682 [2024-12-05 12:56:23.276922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.682 [2024-12-05 12:56:23.276943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:25:23.682 [2024-12-05 12:56:23.276962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.682 [2024-12-05 12:56:23.277318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.682 [2024-12-05 12:56:23.277367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.682 [2024-12-05 12:56:23.277382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:25:23.682 [2024-12-05 12:56:23.277391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.682 [2024-12-05 12:56:23.284303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.682 [2024-12-05 12:56:23.284355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.682 [2024-12-05 12:56:23.284366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.885 ms 00:25:23.682 [2024-12-05 12:56:23.284379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.682 [2024-12-05 12:56:23.287122] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:23.682 [2024-12-05 12:56:23.287165] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:23.682 [2024-12-05 12:56:23.287178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.682 [2024-12-05 12:56:23.287187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:23.682 [2024-12-05 12:56:23.287197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:25:23.682 [2024-12-05 12:56:23.287204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.682 [2024-12-05 12:56:23.301802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.682 [2024-12-05 12:56:23.301891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:23.683 [2024-12-05 12:56:23.301906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.497 ms 00:25:23.683 [2024-12-05 12:56:23.301915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.304424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.304467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:23.683 [2024-12-05 12:56:23.304479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.370 ms 00:25:23.683 [2024-12-05 12:56:23.304487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.305921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 
12:56:23.305968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:23.683 [2024-12-05 12:56:23.305978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:25:23.683 [2024-12-05 12:56:23.305986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.306386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.306416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:23.683 [2024-12-05 12:56:23.306427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:25:23.683 [2024-12-05 12:56:23.306435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.325374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.325448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:23.683 [2024-12-05 12:56:23.325462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.906 ms 00:25:23.683 [2024-12-05 12:56:23.325471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.333861] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:23.683 [2024-12-05 12:56:23.352588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.352658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:23.683 [2024-12-05 12:56:23.352673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.001 ms 00:25:23.683 [2024-12-05 12:56:23.352682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.352863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.352883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:23.683 [2024-12-05 12:56:23.352903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:23.683 [2024-12-05 12:56:23.352915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.352995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.353008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:23.683 [2024-12-05 12:56:23.353022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:23.683 [2024-12-05 12:56:23.353035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.353083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.353098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:23.683 [2024-12-05 12:56:23.353110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:23.683 [2024-12-05 12:56:23.353120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.353167] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:23.683 [2024-12-05 12:56:23.353181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.353189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:23.683 
[2024-12-05 12:56:23.353198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:23.683 [2024-12-05 12:56:23.353209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.357169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.357220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:23.683 [2024-12-05 12:56:23.357233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:25:23.683 [2024-12-05 12:56:23.357258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.357383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.683 [2024-12-05 12:56:23.357410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:23.683 [2024-12-05 12:56:23.357421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:23.683 [2024-12-05 12:56:23.357431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.683 [2024-12-05 12:56:23.358406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:23.683 [2024-12-05 12:56:23.359493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.424 ms, result 0 00:25:23.683 [2024-12-05 12:56:23.360061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:23.683 [2024-12-05 12:56:23.369766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:24.618  [2024-12-05T12:56:25.412Z] Copying: 40/256 [MB] (40 MBps) [2024-12-05T12:56:26.783Z] Copying: 72/256 [MB] (31 MBps) [2024-12-05T12:56:27.716Z] Copying: 114/256 [MB] (42 MBps) [2024-12-05T12:56:28.655Z] Copying: 152/256 [MB] (37 MBps) [2024-12-05T12:56:29.613Z] Copying: 174/256 [MB] (21 MBps) [2024-12-05T12:56:30.541Z] Copying: 208/256 [MB] (34 MBps) [2024-12-05T12:56:30.541Z] Copying: 250/256 [MB] (41 MBps) [2024-12-05T12:56:30.541Z] Copying: 256/256 [MB] (average 35 MBps)[2024-12-05 12:56:30.516310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:30.689 [2024-12-05 12:56:30.517787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.517849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:30.689 [2024-12-05 12:56:30.517864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:30.689 [2024-12-05 12:56:30.517873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.517895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:30.689 [2024-12-05 12:56:30.518444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.518475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:30.689 [2024-12-05 12:56:30.518485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:25:30.689 [2024-12-05 12:56:30.518495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.518763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 
12:56:30.518786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:30.689 [2024-12-05 12:56:30.518800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:25:30.689 [2024-12-05 12:56:30.518823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.522535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.522561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:30.689 [2024-12-05 12:56:30.522572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:25:30.689 [2024-12-05 12:56:30.522581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.529464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.529504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:30.689 [2024-12-05 12:56:30.529515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:25:30.689 [2024-12-05 12:56:30.529526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.531104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.531139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:30.689 [2024-12-05 12:56:30.531148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:25:30.689 [2024-12-05 12:56:30.531156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.535037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.535090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:30.689 [2024-12-05 12:56:30.535108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.858 ms 00:25:30.689 [2024-12-05 12:56:30.535120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.535302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.535324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:30.689 [2024-12-05 12:56:30.535337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:30.689 [2024-12-05 12:56:30.535345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.537346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.537381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:30.689 [2024-12-05 12:56:30.537390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:25:30.689 [2024-12-05 12:56:30.537398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.689 [2024-12-05 12:56:30.538623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.689 [2024-12-05 12:56:30.538658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:30.689 [2024-12-05 12:56:30.538673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:25:30.689 [2024-12-05 12:56:30.538683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.947 [2024-12-05 12:56:30.539598] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:30.947 [2024-12-05 12:56:30.539631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:30.947 [2024-12-05 12:56:30.539640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:25:30.947 [2024-12-05 12:56:30.539648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.947 [2024-12-05 12:56:30.540580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.947 [2024-12-05 12:56:30.540612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:30.947 [2024-12-05 12:56:30.540621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:25:30.947 [2024-12-05 12:56:30.540629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.947 [2024-12-05 12:56:30.540648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:30.947 [2024-12-05 12:56:30.540668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:30.947 [2024-12-05 12:56:30.540759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540832] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.540997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 
[2024-12-05 12:56:30.541027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:25:30.948 [2024-12-05 12:56:30.541215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 
0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:30.948 [2024-12-05 12:56:30.541495] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:30.949 [2024-12-05 12:56:30.541503] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:30.949 [2024-12-05 12:56:30.541511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:30.949 [2024-12-05 12:56:30.541519] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:30.949 [2024-12-05 12:56:30.541530] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:30.949 [2024-12-05 12:56:30.541539] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:30.949 [2024-12-05 12:56:30.541546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:30.949 [2024-12-05 12:56:30.541557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:30.949 [2024-12-05 12:56:30.541565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:30.949 [2024-12-05 12:56:30.541571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:30.949 [2024-12-05 12:56:30.541578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:30.949 [2024-12-05 12:56:30.541585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.949 [2024-12-05 12:56:30.541596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:30.949 [2024-12-05 12:56:30.541605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:25:30.949 [2024-12-05 12:56:30.541612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.543665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.949 [2024-12-05 12:56:30.543708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:30.949 [2024-12-05 12:56:30.543725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:25:30.949 [2024-12-05 12:56:30.543747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.543878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.949 [2024-12-05 12:56:30.543901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:30.949 [2024-12-05 12:56:30.543911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 
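The startup and shutdown sequences above are reported by trace_step() in mngt/ftl_mngt.c as fixed groups of *NOTICE* records: an Action (or Rollback) marker, the step name, its duration, and a status code, with finish_msg() summing the whole pipeline ('FTL startup', duration = 138.424 ms; 'FTL shutdown', duration = 55.318 ms). A minimal, hypothetical post-processor for profiling these steps, assuming one record per line as the console originally emitted them, might look like:

```c
/*
 * Hypothetical post-processor for the trace_step records above.
 * It pairs each "name: <step>" record with the following
 * "duration: <ms> ms" record and prints a per-step summary.
 * The record layout is inferred from this log, not from SPDK code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    static const char name_tag[] = "trace_step: *NOTICE*: [FTL][ftl0] name: ";
    char line[4096];
    char name[256] = "";
    double total_ms = 0.0;
    int steps = 0;

    while (fgets(line, sizeof(line), stdin)) {
        char *p;

        if ((p = strstr(line, name_tag)) != NULL) {
            /* Remember the step name until its duration record arrives. */
            snprintf(name, sizeof(name), "%s", p + strlen(name_tag));
            name[strcspn(name, "\r\n")] = '\0';
        } else if ((p = strstr(line, "duration: ")) != NULL && name[0] != '\0') {
            double ms = strtod(p + strlen("duration: "), NULL);
            printf("%-40s %8.3f ms\n", name, ms);
            total_ms += ms;
            steps++;
            name[0] = '\0';
        }
    }
    printf("%d steps, %.3f ms total\n", steps, total_ms);
    return 0;
}
```

Run against the output above, such a tool would single out Initialize L2P (~27 ms) and Restore valid map metadata (~14.5 ms) as the dominant steps in the ~138 ms startup.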
00:25:30.949 [2024-12-05 12:56:30.543918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.550459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.550506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.949 [2024-12-05 12:56:30.550522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.550532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.550620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.550630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.949 [2024-12-05 12:56:30.550638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.550646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.550702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.550718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.949 [2024-12-05 12:56:30.550727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.550735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.550756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.550764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.949 [2024-12-05 12:56:30.550772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.550784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.562984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.563056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.949 [2024-12-05 12:56:30.563070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.563093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.572408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.572476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.949 [2024-12-05 12:56:30.572489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.572498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.572543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.572552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.949 [2024-12-05 12:56:30.572560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.572568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.572606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.572615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.949 [2024-12-05 12:56:30.572623] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.572637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.572724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.572736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.949 [2024-12-05 12:56:30.572744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.572752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.572786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.572799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:30.949 [2024-12-05 12:56:30.572824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.572832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.572873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.572888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.949 [2024-12-05 12:56:30.572896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.572904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.572959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:30.949 [2024-12-05 12:56:30.572969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.949 [2024-12-05 12:56:30.572977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:30.949 [2024-12-05 12:56:30.572985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.949 [2024-12-05 12:56:30.573130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.318 ms, result 0 00:25:31.206 00:25:31.206 00:25:31.207 12:56:30 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:25:31.207 12:56:30 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:31.814 12:56:31 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:31.814 [2024-12-05 12:56:31.527327] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
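The three trim.sh steps above compare the first 4,194,304 bytes of the read-back data file against /dev/zero (suggesting the test expects trimmed LBAs to read back as zeros), checksum it, and then write a 1024-block random pattern to ftl0 through spdk_dd. Assuming the 4 KiB FTL block size implied by the "Copying: 4096/4096 [kB]" progress further down, the two sizes agree exactly, as this sketch checks:

```c
/* Size arithmetic behind the trim-test commands above; the 4 KiB
 * FTL block size is an assumption consistent with the
 * "Copying: 4096/4096 [kB]" progress reported by spdk_dd below. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    const unsigned long block_size = 4096;    /* assumed FTL block size */
    const unsigned long dd_count   = 1024;    /* spdk_dd --count=1024   */
    const unsigned long cmp_bytes  = 4194304; /* cmp --bytes=4194304    */

    assert(dd_count * block_size == cmp_bytes);        /* 4 MiB pattern */
    printf("%lu blocks * %lu B = %lu B = %lu kB\n",
           dd_count, block_size, dd_count * block_size,
           dd_count * block_size / 1024);              /* 4096 kB */
    return 0;
}
```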
00:25:31.814 [2024-12-05 12:56:31.527482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88094 ] 00:25:32.070 [2024-12-05 12:56:31.690561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.070 [2024-12-05 12:56:31.717347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.070 [2024-12-05 12:56:31.822645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:32.070 [2024-12-05 12:56:31.822731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:32.327 [2024-12-05 12:56:31.977265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.327 [2024-12-05 12:56:31.977337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:32.327 [2024-12-05 12:56:31.977352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:32.327 [2024-12-05 12:56:31.977367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.327 [2024-12-05 12:56:31.980197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.327 [2024-12-05 12:56:31.980247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.327 [2024-12-05 12:56:31.980259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:25:32.327 [2024-12-05 12:56:31.980268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.327 [2024-12-05 12:56:31.980497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:32.327 [2024-12-05 12:56:31.980785] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:32.327 [2024-12-05 12:56:31.980828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.327 [2024-12-05 12:56:31.980837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.327 [2024-12-05 12:56:31.980846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:25:32.327 [2024-12-05 12:56:31.980854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.327 [2024-12-05 12:56:31.982312] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:32.327 [2024-12-05 12:56:31.984839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.327 [2024-12-05 12:56:31.984871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:32.327 [2024-12-05 12:56:31.984882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:25:32.327 [2024-12-05 12:56:31.984893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.327 [2024-12-05 12:56:31.985022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.327 [2024-12-05 12:56:31.985034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:32.327 [2024-12-05 12:56:31.985043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:32.327 [2024-12-05 12:56:31.985050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.327 [2024-12-05 12:56:31.991520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
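The startup that follows re-runs the same management pipeline and dumps the FTL layout again. Its numbers are internally consistent: 23,592,960 L2P entries at 4 bytes each occupy exactly the 90.00 MiB reserved for Region l2p, and the nvc regions in the superblock dump tile the cache contiguously. A hedged consistency check, with region names inferred by matching the blk_offs/blk_sz values against the MiB layout dump below:

```c
/* Consistency check of the superblock v5 nvc layout dumped below.
 * Offsets/sizes are the blk_offs/blk_sz values from the
 * ftl_superblock_v5_md_layout_dump records; the 4 KiB FTL block
 * size is inferred from 0x20 blocks being reported as 0.12 MiB. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    const unsigned regions[][2] = {      /* { blk_offs, blk_sz } */
        { 0x0000, 0x0020 },  /* sb              */
        { 0x0020, 0x5a00 },  /* l2p             */
        { 0x5a20, 0x0080 },  /* band_md         */
        { 0x5aa0, 0x0080 },  /* band_md_mirror  */
        { 0x5b20, 0x0800 },  /* p2l0            */
        { 0x6320, 0x0800 },  /* p2l1            */
        { 0x6b20, 0x0800 },  /* p2l2            */
        { 0x7320, 0x0800 },  /* p2l3            */
        { 0x7b20, 0x0040 },  /* trim_md         */
        { 0x7b60, 0x0040 },  /* trim_md_mirror  */
        { 0x7ba0, 0x0020 },  /* trim_log        */
        { 0x7bc0, 0x0020 },  /* trim_log_mirror */
        { 0x7be0, 0x0020 },  /* nvc_md          */
        { 0x7c00, 0x0020 },  /* nvc_md_mirror   */
    };
    const size_t n = sizeof(regions) / sizeof(regions[0]);
    const unsigned long blk = 4096;      /* inferred FTL block size */

    /* Regions must tile the NV cache contiguously. */
    for (size_t i = 0; i + 1 < n; i++)
        assert(regions[i][0] + regions[i][1] == regions[i + 1][0]);

    /* l2p holds one 4-byte entry per mapped block:
     * 23592960 entries * 4 B == 0x5a00 blocks * 4096 B == 90 MiB. */
    assert(23592960UL * 4 == 0x5a00UL * blk);
    printf("layout contiguous, l2p = %lu MiB\n", (0x5a00UL * blk) >> 20);
    return 0;
}
```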
00:25:32.327 [2024-12-05 12:56:31.991566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.327 [2024-12-05 12:56:31.991577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.423 ms 00:25:32.327 [2024-12-05 12:56:31.991592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.327 [2024-12-05 12:56:31.991760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.327 [2024-12-05 12:56:31.991778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.327 [2024-12-05 12:56:31.991788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:32.327 [2024-12-05 12:56:31.991798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.327 [2024-12-05 12:56:31.991849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.327 [2024-12-05 12:56:31.991859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:32.328 [2024-12-05 12:56:31.991868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:32.328 [2024-12-05 12:56:31.991875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:31.991899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:32.328 [2024-12-05 12:56:31.993564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:31.993600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.328 [2024-12-05 12:56:31.993610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.672 ms 00:25:32.328 [2024-12-05 12:56:31.993620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:31.993663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:31.993676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:32.328 [2024-12-05 12:56:31.993684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:32.328 [2024-12-05 12:56:31.993695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:31.993714] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:32.328 [2024-12-05 12:56:31.993736] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:32.328 [2024-12-05 12:56:31.993773] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:32.328 [2024-12-05 12:56:31.993792] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:32.328 [2024-12-05 12:56:31.993937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:32.328 [2024-12-05 12:56:31.993955] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:32.328 [2024-12-05 12:56:31.993966] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:32.328 [2024-12-05 12:56:31.993977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:32.328 [2024-12-05 12:56:31.993986] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:32.328 [2024-12-05 12:56:31.993994] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:32.328 [2024-12-05 12:56:31.994002] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:32.328 [2024-12-05 12:56:31.994009] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:32.328 [2024-12-05 12:56:31.994019] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:32.328 [2024-12-05 12:56:31.994033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:31.994040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:32.328 [2024-12-05 12:56:31.994048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:25:32.328 [2024-12-05 12:56:31.994059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:31.994150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:31.994159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:32.328 [2024-12-05 12:56:31.994167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:32.328 [2024-12-05 12:56:31.994175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:31.994280] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:32.328 [2024-12-05 12:56:31.994306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:32.328 [2024-12-05 12:56:31.994316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:32.328 [2024-12-05 12:56:31.994341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:32.328 [2024-12-05 12:56:31.994367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.328 [2024-12-05 12:56:31.994383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:32.328 [2024-12-05 12:56:31.994391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:32.328 [2024-12-05 12:56:31.994398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.328 [2024-12-05 12:56:31.994406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:32.328 [2024-12-05 12:56:31.994414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:32.328 [2024-12-05 12:56:31.994421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:32.328 [2024-12-05 12:56:31.994438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994446] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:32.328 [2024-12-05 12:56:31.994463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:32.328 [2024-12-05 12:56:31.994485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:32.328 [2024-12-05 12:56:31.994511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:32.328 [2024-12-05 12:56:31.994534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:32.328 [2024-12-05 12:56:31.994556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.328 [2024-12-05 12:56:31.994572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:32.328 [2024-12-05 12:56:31.994584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:32.328 [2024-12-05 12:56:31.994595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.328 [2024-12-05 12:56:31.994606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:32.328 [2024-12-05 12:56:31.994617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:32.328 [2024-12-05 12:56:31.994624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:32.328 [2024-12-05 12:56:31.994641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:32.328 [2024-12-05 12:56:31.994648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994655] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:32.328 [2024-12-05 12:56:31.994663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:32.328 [2024-12-05 12:56:31.994670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.328 [2024-12-05 12:56:31.994685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:32.328 [2024-12-05 12:56:31.994692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:32.328 [2024-12-05 12:56:31.994699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:32.328 
[2024-12-05 12:56:31.994705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:32.328 [2024-12-05 12:56:31.994712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:32.328 [2024-12-05 12:56:31.994719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:32.328 [2024-12-05 12:56:31.994730] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:32.328 [2024-12-05 12:56:31.994739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.328 [2024-12-05 12:56:31.994753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:32.328 [2024-12-05 12:56:31.994762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:32.328 [2024-12-05 12:56:31.994770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:32.328 [2024-12-05 12:56:31.994781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:32.328 [2024-12-05 12:56:31.994788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:32.328 [2024-12-05 12:56:31.994799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:32.328 [2024-12-05 12:56:31.994824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:32.328 [2024-12-05 12:56:31.994838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:32.328 [2024-12-05 12:56:31.994849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:32.328 [2024-12-05 12:56:31.994856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:32.328 [2024-12-05 12:56:31.994868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:32.328 [2024-12-05 12:56:31.994876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:32.328 [2024-12-05 12:56:31.994887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:32.328 [2024-12-05 12:56:31.994898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:32.328 [2024-12-05 12:56:31.994906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:32.328 [2024-12-05 12:56:31.994917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.328 [2024-12-05 12:56:31.994925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:32.328 [2024-12-05 12:56:31.994936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:32.328 [2024-12-05 12:56:31.994943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:32.328 [2024-12-05 12:56:31.994951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:32.328 [2024-12-05 12:56:31.994958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:31.994966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:32.328 [2024-12-05 12:56:31.994973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:25:32.328 [2024-12-05 12:56:31.994981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.006428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.006485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.328 [2024-12-05 12:56:32.006500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.394 ms 00:25:32.328 [2024-12-05 12:56:32.006516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.006683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.006721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:32.328 [2024-12-05 12:56:32.006729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:32.328 [2024-12-05 12:56:32.006737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.024439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.024512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:32.328 [2024-12-05 12:56:32.024529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.675 ms 00:25:32.328 [2024-12-05 12:56:32.024539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.024701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.024730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:32.328 [2024-12-05 12:56:32.024742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:32.328 [2024-12-05 12:56:32.024750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.025195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.025230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:32.328 [2024-12-05 12:56:32.025244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:25:32.328 [2024-12-05 12:56:32.025255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.025439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.025464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:32.328 [2024-12-05 12:56:32.025474] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:25:32.328 [2024-12-05 12:56:32.025487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.032725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.032768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.328 [2024-12-05 12:56:32.032780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.212 ms 00:25:32.328 [2024-12-05 12:56:32.032793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.035453] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:32.328 [2024-12-05 12:56:32.035495] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:32.328 [2024-12-05 12:56:32.035512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.035521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:32.328 [2024-12-05 12:56:32.035531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:25:32.328 [2024-12-05 12:56:32.035538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.050235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.050308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:32.328 [2024-12-05 12:56:32.050323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.615 ms 00:25:32.328 [2024-12-05 12:56:32.050332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.053244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.053329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:32.328 [2024-12-05 12:56:32.053342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms 00:25:32.328 [2024-12-05 12:56:32.053350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.054908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.054957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:32.328 [2024-12-05 12:56:32.054970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:25:32.328 [2024-12-05 12:56:32.054979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.055353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.055377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:32.328 [2024-12-05 12:56:32.055392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:25:32.328 [2024-12-05 12:56:32.055401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.073697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.073762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:32.328 [2024-12-05 12:56:32.073778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.270 ms 00:25:32.328 [2024-12-05 12:56:32.073787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.082247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:32.328 [2024-12-05 12:56:32.100163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.100226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:32.328 [2024-12-05 12:56:32.100242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.193 ms 00:25:32.328 [2024-12-05 12:56:32.100252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.328 [2024-12-05 12:56:32.100386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.328 [2024-12-05 12:56:32.100397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:32.328 [2024-12-05 12:56:32.100416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:32.328 [2024-12-05 12:56:32.100424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.329 [2024-12-05 12:56:32.100483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.329 [2024-12-05 12:56:32.100496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:32.329 [2024-12-05 12:56:32.100504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:32.329 [2024-12-05 12:56:32.100511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.329 [2024-12-05 12:56:32.100538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.329 [2024-12-05 12:56:32.100548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:32.329 [2024-12-05 12:56:32.100556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:32.329 [2024-12-05 12:56:32.100566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.329 [2024-12-05 12:56:32.100604] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:32.329 [2024-12-05 12:56:32.100614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.329 [2024-12-05 12:56:32.100621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:32.329 [2024-12-05 12:56:32.100629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:32.329 [2024-12-05 12:56:32.100640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.329 [2024-12-05 12:56:32.104957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.329 [2024-12-05 12:56:32.105007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:32.329 [2024-12-05 12:56:32.105028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:25:32.329 [2024-12-05 12:56:32.105041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.329 [2024-12-05 12:56:32.105131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.329 [2024-12-05 12:56:32.105142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:32.329 [2024-12-05 12:56:32.105151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:32.329 [2024-12-05 12:56:32.105159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.329 
[2024-12-05 12:56:32.106123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:32.329 [2024-12-05 12:56:32.107209] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.554 ms, result 0 00:25:32.329 [2024-12-05 12:56:32.107692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.329 [2024-12-05 12:56:32.116839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:32.588  [2024-12-05T12:56:32.440Z] Copying: 4096/4096 [kB] (average 38 MBps)
[2024-12-05 12:56:32.221626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:32.588 [2024-12-05 12:56:32.223072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.223114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:32.588 [2024-12-05 12:56:32.223135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:32.588 [2024-12-05 12:56:32.223144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.223167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:32.588 [2024-12-05 12:56:32.223706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.223734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:32.588 [2024-12-05 12:56:32.223743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:25:32.588 [2024-12-05 12:56:32.223751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.225381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.225413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:32.588 [2024-12-05 12:56:32.225427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:25:32.588 [2024-12-05 12:56:32.225436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.229227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.229255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:32.588 [2024-12-05 12:56:32.229280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.774 ms 00:25:32.588 [2024-12-05 12:56:32.229290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.236233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.236297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:32.588 [2024-12-05 12:56:32.236310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:25:32.588 [2024-12-05 12:56:32.236319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.237825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.237863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:32.588 [2024-12-05 12:56:32.237873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 1.429 ms 00:25:32.588 [2024-12-05 12:56:32.237881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.241258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.241308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:32.588 [2024-12-05 12:56:32.241318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.343 ms 00:25:32.588 [2024-12-05 12:56:32.241327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.241460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.241474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:32.588 [2024-12-05 12:56:32.241487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:32.588 [2024-12-05 12:56:32.241501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.243078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.243111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:32.588 [2024-12-05 12:56:32.243121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:25:32.588 [2024-12-05 12:56:32.243129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.244166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.244197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:32.588 [2024-12-05 12:56:32.244207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:25:32.588 [2024-12-05 12:56:32.244215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.245071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.245100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:32.588 [2024-12-05 12:56:32.245109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:25:32.588 [2024-12-05 12:56:32.245116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.246015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.588 [2024-12-05 12:56:32.246045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:32.588 [2024-12-05 12:56:32.246054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:25:32.588 [2024-12-05 12:56:32.246061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.588 [2024-12-05 12:56:32.246090] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:32.588 [2024-12-05 12:56:32.246106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 
12:56:32.246142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:25:32.588 [2024-12-05 12:56:32.246333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:32.588 [2024-12-05 12:56:32.246845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:32.589 [2024-12-05 12:56:32.246868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:32.589 [2024-12-05 12:56:32.246882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:32.589 [2024-12-05 12:56:32.246890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:32.589 [2024-12-05 12:56:32.246898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:32.589 [2024-12-05 12:56:32.246914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:32.589 [2024-12-05 12:56:32.246921] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:32.589 [2024-12-05 12:56:32.246930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:32.589 [2024-12-05 12:56:32.246937] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:32.589 
[2024-12-05 12:56:32.246950] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:32.589 [2024-12-05 12:56:32.246959] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:32.589 [2024-12-05 12:56:32.246966] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:32.589 [2024-12-05 12:56:32.246976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:32.589 [2024-12-05 12:56:32.246984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:32.589 [2024-12-05 12:56:32.246991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:32.589 [2024-12-05 12:56:32.246998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:32.589 [2024-12-05 12:56:32.247005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.589 [2024-12-05 12:56:32.247013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:32.589 [2024-12-05 12:56:32.247022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:25:32.589 [2024-12-05 12:56:32.247030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.248953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.589 [2024-12-05 12:56:32.248979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:32.589 [2024-12-05 12:56:32.248990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:25:32.589 [2024-12-05 12:56:32.249001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.249096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.589 [2024-12-05 12:56:32.249111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:32.589 [2024-12-05 12:56:32.249121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:32.589 [2024-12-05 12:56:32.249129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.255664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.255714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.589 [2024-12-05 12:56:32.255731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.255739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.255825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.255840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:32.589 [2024-12-05 12:56:32.255849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.255856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.255907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.255917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:32.589 [2024-12-05 12:56:32.255924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.255935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.255954] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.255962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:32.589 [2024-12-05 12:56:32.255973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.255982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.268068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.268144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:32.589 [2024-12-05 12:56:32.268158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.268171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.277474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.277540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.589 [2024-12-05 12:56:32.277554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.277564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.277647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.277663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.589 [2024-12-05 12:56:32.277675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.277683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.277717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.277726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.589 [2024-12-05 12:56:32.277734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.277742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.277828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.277839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.589 [2024-12-05 12:56:32.277851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.277859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.277889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.277901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:32.589 [2024-12-05 12:56:32.277909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.277917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.277956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.277965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.589 [2024-12-05 12:56:32.277973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.277980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:32.589 [2024-12-05 12:56:32.278032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.589 [2024-12-05 12:56:32.278052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.589 [2024-12-05 12:56:32.278060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.589 [2024-12-05 12:56:32.278069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.589 [2024-12-05 12:56:32.278211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.112 ms, result 0 00:25:33.154 00:25:33.154 00:25:33.154 12:56:32 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=88114 00:25:33.154 12:56:32 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:33.154 12:56:32 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 88114 00:25:33.154 12:56:32 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88114 ']' 00:25:33.154 12:56:32 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.154 12:56:32 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.154 12:56:32 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.154 12:56:32 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.154 12:56:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:33.413 [2024-12-05 12:56:33.053288] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:25:33.413 [2024-12-05 12:56:33.053435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88114 ] 00:25:33.413 [2024-12-05 12:56:33.210920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.413 [2024-12-05 12:56:33.236256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.342 12:56:33 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.342 12:56:33 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:34.342 12:56:33 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:34.342 [2024-12-05 12:56:34.136936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.342 [2024-12-05 12:56:34.137053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.600 [2024-12-05 12:56:34.308400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.600 [2024-12-05 12:56:34.308473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:34.600 [2024-12-05 12:56:34.308489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.600 [2024-12-05 12:56:34.308500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.600 [2024-12-05 12:56:34.311147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.600 [2024-12-05 12:56:34.311203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.600 [2024-12-05 12:56:34.311215] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.627 ms 00:25:34.600 [2024-12-05 12:56:34.311226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.600 [2024-12-05 12:56:34.311429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:34.600 [2024-12-05 12:56:34.311712] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:34.600 [2024-12-05 12:56:34.311739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.600 [2024-12-05 12:56:34.311751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.600 [2024-12-05 12:56:34.311761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:25:34.600 [2024-12-05 12:56:34.311770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.600 [2024-12-05 12:56:34.313350] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:34.600 [2024-12-05 12:56:34.316128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.600 [2024-12-05 12:56:34.316175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:34.600 [2024-12-05 12:56:34.316189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:25:34.600 [2024-12-05 12:56:34.316199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.600 [2024-12-05 12:56:34.316312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.601 [2024-12-05 12:56:34.316324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:34.601 [2024-12-05 12:56:34.316337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:34.601 [2024-12-05 12:56:34.316345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.323016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.601 [2024-12-05 12:56:34.323069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.601 [2024-12-05 12:56:34.323089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.606 ms 00:25:34.601 [2024-12-05 12:56:34.323098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.323246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.601 [2024-12-05 12:56:34.323257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.601 [2024-12-05 12:56:34.323267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:34.601 [2024-12-05 12:56:34.323278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.323314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.601 [2024-12-05 12:56:34.323331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:34.601 [2024-12-05 12:56:34.323341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:34.601 [2024-12-05 12:56:34.323348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.323377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:34.601 [2024-12-05 12:56:34.325113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:34.601 [2024-12-05 12:56:34.325157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.601 [2024-12-05 12:56:34.325177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:25:34.601 [2024-12-05 12:56:34.325189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.325254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.601 [2024-12-05 12:56:34.325267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:34.601 [2024-12-05 12:56:34.325288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:34.601 [2024-12-05 12:56:34.325300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.325330] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:34.601 [2024-12-05 12:56:34.325367] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:34.601 [2024-12-05 12:56:34.325430] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:34.601 [2024-12-05 12:56:34.325458] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:34.601 [2024-12-05 12:56:34.325567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:34.601 [2024-12-05 12:56:34.325587] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:34.601 [2024-12-05 12:56:34.325599] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:34.601 [2024-12-05 12:56:34.325611] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:34.601 [2024-12-05 12:56:34.325620] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:34.601 [2024-12-05 12:56:34.325633] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:34.601 [2024-12-05 12:56:34.325641] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:34.601 [2024-12-05 12:56:34.325650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:34.601 [2024-12-05 12:56:34.325660] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:34.601 [2024-12-05 12:56:34.325670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.601 [2024-12-05 12:56:34.325677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:34.601 [2024-12-05 12:56:34.325687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:25:34.601 [2024-12-05 12:56:34.325695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.325784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.601 [2024-12-05 12:56:34.325797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:34.601 [2024-12-05 12:56:34.325836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:34.601 [2024-12-05 12:56:34.325844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.601 [2024-12-05 12:56:34.325956] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:34.601 [2024-12-05 12:56:34.325972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:34.601 [2024-12-05 12:56:34.325982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.601 [2024-12-05 12:56:34.325991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:34.601 [2024-12-05 12:56:34.326018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:34.601 [2024-12-05 12:56:34.326042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.601 [2024-12-05 12:56:34.326058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:34.601 [2024-12-05 12:56:34.326065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:34.601 [2024-12-05 12:56:34.326073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.601 [2024-12-05 12:56:34.326079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:34.601 [2024-12-05 12:56:34.326088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:34.601 [2024-12-05 12:56:34.326094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:34.601 [2024-12-05 12:56:34.326110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:34.601 [2024-12-05 12:56:34.326134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:34.601 [2024-12-05 12:56:34.326159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:34.601 [2024-12-05 12:56:34.326182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:34.601 [2024-12-05 12:56:34.326203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:34.601 [2024-12-05 
12:56:34.326228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.601 [2024-12-05 12:56:34.326243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:34.601 [2024-12-05 12:56:34.326249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:34.601 [2024-12-05 12:56:34.326259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.601 [2024-12-05 12:56:34.326266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:34.601 [2024-12-05 12:56:34.326274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:34.601 [2024-12-05 12:56:34.326281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:34.601 [2024-12-05 12:56:34.326294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:34.601 [2024-12-05 12:56:34.326302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:34.601 [2024-12-05 12:56:34.326318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:34.601 [2024-12-05 12:56:34.326325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.601 [2024-12-05 12:56:34.326342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:34.601 [2024-12-05 12:56:34.326350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:34.601 [2024-12-05 12:56:34.326357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:34.601 [2024-12-05 12:56:34.326365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:34.601 [2024-12-05 12:56:34.326371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:34.601 [2024-12-05 12:56:34.326382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:34.601 [2024-12-05 12:56:34.326392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:34.601 [2024-12-05 12:56:34.326404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.601 [2024-12-05 12:56:34.326420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:34.601 [2024-12-05 12:56:34.326429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:34.601 [2024-12-05 12:56:34.326436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:34.602 [2024-12-05 12:56:34.326445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:34.602 [2024-12-05 12:56:34.326452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:34.602 
[2024-12-05 12:56:34.326461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:34.602 [2024-12-05 12:56:34.326468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:34.602 [2024-12-05 12:56:34.326477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:34.602 [2024-12-05 12:56:34.326484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:34.602 [2024-12-05 12:56:34.326493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:34.602 [2024-12-05 12:56:34.326500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:34.602 [2024-12-05 12:56:34.326509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:34.602 [2024-12-05 12:56:34.326516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:34.602 [2024-12-05 12:56:34.326527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:34.602 [2024-12-05 12:56:34.326534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:34.602 [2024-12-05 12:56:34.326544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.602 [2024-12-05 12:56:34.326552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.602 [2024-12-05 12:56:34.326561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:34.602 [2024-12-05 12:56:34.326568] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:34.602 [2024-12-05 12:56:34.326577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:34.602 [2024-12-05 12:56:34.326585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.326595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:34.602 [2024-12-05 12:56:34.326603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:25:34.602 [2024-12-05 12:56:34.326611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.338117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.338171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.602 [2024-12-05 12:56:34.338184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.428 ms 00:25:34.602 [2024-12-05 12:56:34.338195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.338359] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.338384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:34.602 [2024-12-05 12:56:34.338393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:34.602 [2024-12-05 12:56:34.338408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.348994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.349048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.602 [2024-12-05 12:56:34.349061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.559 ms 00:25:34.602 [2024-12-05 12:56:34.349081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.349184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.349196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.602 [2024-12-05 12:56:34.349205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:34.602 [2024-12-05 12:56:34.349215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.349630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.349668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.602 [2024-12-05 12:56:34.349679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:25:34.602 [2024-12-05 12:56:34.349690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.349872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.349894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.602 [2024-12-05 12:56:34.349903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:25:34.602 [2024-12-05 12:56:34.349914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.356657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.356712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.602 [2024-12-05 12:56:34.356723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.719 ms 00:25:34.602 [2024-12-05 12:56:34.356733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.376195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:34.602 [2024-12-05 12:56:34.376274] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:34.602 [2024-12-05 12:56:34.376294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.376309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:34.602 [2024-12-05 12:56:34.376324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.424 ms 00:25:34.602 [2024-12-05 12:56:34.376337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.396796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 
12:56:34.396890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:34.602 [2024-12-05 12:56:34.396907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.338 ms 00:25:34.602 [2024-12-05 12:56:34.396922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.399682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.399739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:34.602 [2024-12-05 12:56:34.399750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:25:34.602 [2024-12-05 12:56:34.399761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.401187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.401228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:34.602 [2024-12-05 12:56:34.401238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:25:34.602 [2024-12-05 12:56:34.401247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.401635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.401666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:34.602 [2024-12-05 12:56:34.401676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:25:34.602 [2024-12-05 12:56:34.401685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.419715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.419795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:34.602 [2024-12-05 12:56:34.419820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.004 ms 00:25:34.602 [2024-12-05 12:56:34.419834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.428624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:34.602 [2024-12-05 12:56:34.446526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.446592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:34.602 [2024-12-05 12:56:34.446609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.547 ms 00:25:34.602 [2024-12-05 12:56:34.446618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.446743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.446757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:34.602 [2024-12-05 12:56:34.446768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:34.602 [2024-12-05 12:56:34.446782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.446853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.446863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:34.602 [2024-12-05 12:56:34.446873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:34.602 [2024-12-05 
12:56:34.446880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.446905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.446919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:34.602 [2024-12-05 12:56:34.446937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.602 [2024-12-05 12:56:34.446944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.602 [2024-12-05 12:56:34.446982] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:34.602 [2024-12-05 12:56:34.446993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.602 [2024-12-05 12:56:34.447003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:34.602 [2024-12-05 12:56:34.447011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:34.602 [2024-12-05 12:56:34.447021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.860 [2024-12-05 12:56:34.450949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.860 [2024-12-05 12:56:34.451006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:34.860 [2024-12-05 12:56:34.451019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:25:34.860 [2024-12-05 12:56:34.451032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.860 [2024-12-05 12:56:34.451139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.860 [2024-12-05 12:56:34.451152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:34.860 [2024-12-05 12:56:34.451161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:34.860 [2024-12-05 12:56:34.451171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.860 [2024-12-05 12:56:34.452180] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:34.860 [2024-12-05 12:56:34.453223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 143.496 ms, result 0 00:25:34.860 [2024-12-05 12:56:34.454242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:34.860 Some configs were skipped because the RPC state that can call them passed over. 
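The trace above is the tail of an rpc.py load_config replay: a bdev configuration captured earlier in the run (the save itself is outside this excerpt) is reloaded into the fresh spdk_tgt started with -L ftl_init (pid 88114, svcpid above), which re-runs the 'FTL startup' management process; the closing "Some configs were skipped" line is load_config reporting entries it did not replay against the current RPC state. The two bdev_ftl_unmap calls that follow each run as their own 'FTL trim' management process. Below is a minimal bash sketch of that sequence, not the literal trim.sh: the polling loop is an illustrative stand-in for the waitforlisten helper, the config-file path is hypothetical, and the final kill/wait is a simplification of killprocess; the spdk_tgt and rpc.py invocations are the ones visible in the trace.

# Sketch only; paths and readiness polling are simplified assumptions.
SPDK=/home/vagrant/spdk_repo/spdk

# Start a fresh SPDK target with FTL init-time logging enabled.
$SPDK/build/bin/spdk_tgt -L ftl_init &
svcpid=$!

# Wait for the RPC socket to answer (trim.sh uses waitforlisten; polling a
# cheap RPC such as spdk_get_version is a stand-in with the same effect).
until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done

# Replay the saved bdev config; this re-creates ftl0 and produces the
# 'FTL startup' trace seen above. The JSON path here is illustrative.
$SPDK/scripts/rpc.py load_config < /tmp/ftl_config.json

# Trim 1024 blocks at the start and at the end of the 23592960-entry L2P
# range; each call appears as an 'FTL trim' management process below.
$SPDK/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
$SPDK/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# Roughly what killprocess does; stopping the target triggers the
# 'FTL shutdown' management process traced further below.
kill "$svcpid" && wait "$svcpid"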
00:25:34.860 12:56:34 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:34.860 [2024-12-05 12:56:34.680864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.860 [2024-12-05 12:56:34.680944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:34.860 [2024-12-05 12:56:34.680961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:25:34.860 [2024-12-05 12:56:34.680970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.860 [2024-12-05 12:56:34.681007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.443 ms, result 0 00:25:34.860 true 00:25:34.860 12:56:34 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:35.117 [2024-12-05 12:56:34.888681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.117 [2024-12-05 12:56:34.888749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:35.117 [2024-12-05 12:56:34.888763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:25:35.117 [2024-12-05 12:56:34.888773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.118 [2024-12-05 12:56:34.888825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 0.949 ms, result 0 00:25:35.118 true 00:25:35.118 12:56:34 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 88114 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88114 ']' 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88114 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88114 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.118 killing process with pid 88114 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88114' 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88114 00:25:35.118 12:56:34 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88114 00:25:35.377 [2024-12-05 12:56:35.058235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.058308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:35.377 [2024-12-05 12:56:35.058329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.377 [2024-12-05 12:56:35.058342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.058371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:35.377 [2024-12-05 12:56:35.058967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.059003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:35.377 [2024-12-05 12:56:35.059013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.582 ms 00:25:35.377 [2024-12-05 12:56:35.059022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.059320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.059342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:35.377 [2024-12-05 12:56:35.059352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:25:35.377 [2024-12-05 12:56:35.059362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.063404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.063447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:35.377 [2024-12-05 12:56:35.063459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.023 ms 00:25:35.377 [2024-12-05 12:56:35.063472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.070577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.070632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:35.377 [2024-12-05 12:56:35.070642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.070 ms 00:25:35.377 [2024-12-05 12:56:35.070655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.072318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.072363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:35.377 [2024-12-05 12:56:35.072372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:25:35.377 [2024-12-05 12:56:35.072382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.075634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.075677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:35.377 [2024-12-05 12:56:35.075690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:25:35.377 [2024-12-05 12:56:35.075700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.075834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.075845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:35.377 [2024-12-05 12:56:35.075855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:25:35.377 [2024-12-05 12:56:35.075864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.077421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.077466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:35.377 [2024-12-05 12:56:35.077474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:25:35.377 [2024-12-05 12:56:35.077487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.078717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.078757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:35.377 [2024-12-05 
12:56:35.078766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.195 ms 00:25:35.377 [2024-12-05 12:56:35.078776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.079745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.079784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:35.377 [2024-12-05 12:56:35.079794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:25:35.377 [2024-12-05 12:56:35.079803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.080631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.377 [2024-12-05 12:56:35.080665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:35.377 [2024-12-05 12:56:35.080674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:25:35.377 [2024-12-05 12:56:35.080683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.377 [2024-12-05 12:56:35.080735] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:35.377 [2024-12-05 12:56:35.080754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080932] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.080990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:35.377 [2024-12-05 12:56:35.081105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 
12:56:35.081145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:25:35.378 [2024-12-05 12:56:35.081371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:35.378 [2024-12-05 12:56:35.081685] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:35.378 [2024-12-05 12:56:35.081693] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:35.378 [2024-12-05 12:56:35.081706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:35.378 [2024-12-05 12:56:35.081714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:35.378 [2024-12-05 12:56:35.081723] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:35.378 [2024-12-05 12:56:35.081731] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:35.378 [2024-12-05 12:56:35.081741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:35.378 [2024-12-05 12:56:35.081751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:35.378 [2024-12-05 12:56:35.081760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:35.378 [2024-12-05 12:56:35.081766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:35.378 [2024-12-05 12:56:35.081775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:35.378 [2024-12-05 12:56:35.081782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.378 [2024-12-05 12:56:35.081792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:35.378 [2024-12-05 12:56:35.081800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:25:35.378 [2024-12-05 12:56:35.081831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.378 [2024-12-05 12:56:35.083711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.378 [2024-12-05 12:56:35.083742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:35.378 [2024-12-05 12:56:35.083754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.859 ms 00:25:35.378 [2024-12-05 12:56:35.083765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.378 [2024-12-05 12:56:35.083890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:35.378 [2024-12-05 12:56:35.083903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:35.378 [2024-12-05 12:56:35.083918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:35.378 [2024-12-05 12:56:35.083929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.378 [2024-12-05 12:56:35.090500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.378 [2024-12-05 12:56:35.090558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.378 [2024-12-05 12:56:35.090570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.378 [2024-12-05 12:56:35.090580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.378 [2024-12-05 12:56:35.090675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.378 [2024-12-05 12:56:35.090686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.378 [2024-12-05 12:56:35.090694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.378 [2024-12-05 12:56:35.090706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.378 [2024-12-05 12:56:35.090777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.378 [2024-12-05 12:56:35.090795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.379 [2024-12-05 12:56:35.090803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.090860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.090880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.090890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.379 [2024-12-05 12:56:35.090898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.090907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.103536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.103616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.379 [2024-12-05 12:56:35.103629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.103639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.113347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.113424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.379 [2024-12-05 12:56:35.113437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.113450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.113524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.113535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.379 [2024-12-05 12:56:35.113544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.113553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:35.379 [2024-12-05 12:56:35.113586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.113603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.379 [2024-12-05 12:56:35.113611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.113620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.113694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.113709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.379 [2024-12-05 12:56:35.113716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.113727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.113779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.113790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:35.379 [2024-12-05 12:56:35.113799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.113823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.113865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.113883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.379 [2024-12-05 12:56:35.113891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.113900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.113947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:35.379 [2024-12-05 12:56:35.113963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.379 [2024-12-05 12:56:35.113973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:35.379 [2024-12-05 12:56:35.113983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.379 [2024-12-05 12:56:35.114136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.873 ms, result 0 00:25:35.942 12:56:35 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:35.942 [2024-12-05 12:56:35.679275] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
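The shutdown above ('FTL shutdown', 55.873 ms, result 0) is followed by trim.sh@105 re-opening the device through spdk_dd for the data copy. The figures in this run can be cross-checked against each other; a minimal sketch in which the 4 KiB FTL block size is an assumption (the log never prints it), though it agrees with both the layout dump and the dd progress further down:

    # From the layout dump repeated below for this open:
    L2P_ENTRIES = 23592960      # "L2P entries: 23592960"
    L2P_ADDR_SIZE = 4           # "L2P address size: 4"
    FTL_BLOCK = 4096            # bytes per block -- assumed, not logged

    # trim.sh@100 unmapped 1024 blocks at LBA 23591936, i.e. exactly the
    # last 1024 user-visible blocks of the device:
    assert 23591936 + 1024 == L2P_ENTRIES

    # One 4-byte L2P entry per user block -> 90 MiB, matching
    # "Region l2p ... blocks: 90.00 MiB" in the layout dump:
    print(L2P_ENTRIES * L2P_ADDR_SIZE / 2**20, "MiB l2p")    # 90.0

    # spdk_dd --count=65536 copies 65536 blocks -> 256 MiB, matching the
    # "Copying: 256/256 [MB]" progress below:
    print(65536 * FTL_BLOCK / 2**20, "MiB copied")           # 256.0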
00:25:35.943 [2024-12-05 12:56:35.679425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88155 ] 00:25:36.200 [2024-12-05 12:56:35.839485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.200 [2024-12-05 12:56:35.865799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.200 [2024-12-05 12:56:35.968839] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.200 [2024-12-05 12:56:35.969097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.478 [2024-12-05 12:56:36.119879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.119948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.478 [2024-12-05 12:56:36.119961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.478 [2024-12-05 12:56:36.119968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.121975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.122009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.478 [2024-12-05 12:56:36.122017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:25:36.478 [2024-12-05 12:56:36.122023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.122086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.478 [2024-12-05 12:56:36.122295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.478 [2024-12-05 12:56:36.122309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.122315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.478 [2024-12-05 12:56:36.122325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:25:36.478 [2024-12-05 12:56:36.122331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.123651] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.478 [2024-12-05 12:56:36.126094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.126123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.478 [2024-12-05 12:56:36.126132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:25:36.478 [2024-12-05 12:56:36.126145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.126200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.126209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:36.478 [2024-12-05 12:56:36.126216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:36.478 [2024-12-05 12:56:36.126222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.132308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:36.478 [2024-12-05 12:56:36.132338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.478 [2024-12-05 12:56:36.132348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.050 ms 00:25:36.478 [2024-12-05 12:56:36.132354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.132472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.132481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.478 [2024-12-05 12:56:36.132492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:36.478 [2024-12-05 12:56:36.132501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.132528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.132536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.478 [2024-12-05 12:56:36.132542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:36.478 [2024-12-05 12:56:36.132549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.132568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:36.478 [2024-12-05 12:56:36.134143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.134264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.478 [2024-12-05 12:56:36.134277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:25:36.478 [2024-12-05 12:56:36.134288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.134332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.134342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.478 [2024-12-05 12:56:36.134349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:36.478 [2024-12-05 12:56:36.134356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.134377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.478 [2024-12-05 12:56:36.134396] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:36.478 [2024-12-05 12:56:36.134427] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.478 [2024-12-05 12:56:36.134443] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:36.478 [2024-12-05 12:56:36.134531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.478 [2024-12-05 12:56:36.134541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.478 [2024-12-05 12:56:36.134549] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:36.478 [2024-12-05 12:56:36.134558] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.478 [2024-12-05 12:56:36.134566] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.478 [2024-12-05 12:56:36.134572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:36.478 [2024-12-05 12:56:36.134579] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.478 [2024-12-05 12:56:36.134589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.478 [2024-12-05 12:56:36.134597] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.478 [2024-12-05 12:56:36.134605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.134612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.478 [2024-12-05 12:56:36.134618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:25:36.478 [2024-12-05 12:56:36.134624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.134697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.478 [2024-12-05 12:56:36.134704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.478 [2024-12-05 12:56:36.134710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:36.478 [2024-12-05 12:56:36.134717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.478 [2024-12-05 12:56:36.134801] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.478 [2024-12-05 12:56:36.134826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.478 [2024-12-05 12:56:36.134833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.478 [2024-12-05 12:56:36.134840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.478 [2024-12-05 12:56:36.134850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.478 [2024-12-05 12:56:36.134857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.478 [2024-12-05 12:56:36.134863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:36.478 [2024-12-05 12:56:36.134870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.478 [2024-12-05 12:56:36.134881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:36.478 [2024-12-05 12:56:36.134888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.478 [2024-12-05 12:56:36.134899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.478 [2024-12-05 12:56:36.134905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:36.478 [2024-12-05 12:56:36.134912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.478 [2024-12-05 12:56:36.134918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.478 [2024-12-05 12:56:36.134925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:36.478 [2024-12-05 12:56:36.134931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.478 [2024-12-05 12:56:36.134937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.478 [2024-12-05 12:56:36.134943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:36.478 [2024-12-05 12:56:36.134951] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.478 [2024-12-05 12:56:36.134958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.478 [2024-12-05 12:56:36.134964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:36.478 [2024-12-05 12:56:36.134971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.478 [2024-12-05 12:56:36.134977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.478 [2024-12-05 12:56:36.134983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:36.478 [2024-12-05 12:56:36.134994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.478 [2024-12-05 12:56:36.135000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.478 [2024-12-05 12:56:36.135006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:36.478 [2024-12-05 12:56:36.135013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.478 [2024-12-05 12:56:36.135019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.478 [2024-12-05 12:56:36.135025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:36.478 [2024-12-05 12:56:36.135032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.479 [2024-12-05 12:56:36.135038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.479 [2024-12-05 12:56:36.135045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:36.479 [2024-12-05 12:56:36.135051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.479 [2024-12-05 12:56:36.135057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.479 [2024-12-05 12:56:36.135063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:36.479 [2024-12-05 12:56:36.135070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.479 [2024-12-05 12:56:36.135076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.479 [2024-12-05 12:56:36.135082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:36.479 [2024-12-05 12:56:36.135087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.479 [2024-12-05 12:56:36.135095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.479 [2024-12-05 12:56:36.135101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:36.479 [2024-12-05 12:56:36.135107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.479 [2024-12-05 12:56:36.135113] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.479 [2024-12-05 12:56:36.135121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.479 [2024-12-05 12:56:36.135127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.479 [2024-12-05 12:56:36.135134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.479 [2024-12-05 12:56:36.135142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.479 [2024-12-05 12:56:36.135148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.479 [2024-12-05 12:56:36.135155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.479 
[2024-12-05 12:56:36.135162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.479 [2024-12-05 12:56:36.135168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.479 [2024-12-05 12:56:36.135175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.479 [2024-12-05 12:56:36.135183] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.479 [2024-12-05 12:56:36.135194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.479 [2024-12-05 12:56:36.135205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:36.479 [2024-12-05 12:56:36.135214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:36.479 [2024-12-05 12:56:36.135220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:36.479 [2024-12-05 12:56:36.135227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:36.479 [2024-12-05 12:56:36.135233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:36.479 [2024-12-05 12:56:36.135239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:36.479 [2024-12-05 12:56:36.135245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:36.479 [2024-12-05 12:56:36.135251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:36.479 [2024-12-05 12:56:36.135256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:36.479 [2024-12-05 12:56:36.135261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:36.479 [2024-12-05 12:56:36.135266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:36.479 [2024-12-05 12:56:36.135271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:36.479 [2024-12-05 12:56:36.135277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:36.479 [2024-12-05 12:56:36.135282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:36.479 [2024-12-05 12:56:36.135287] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.479 [2024-12-05 12:56:36.135295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.479 [2024-12-05 12:56:36.135301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.479 [2024-12-05 12:56:36.135308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.479 [2024-12-05 12:56:36.135314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.479 [2024-12-05 12:56:36.135319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.479 [2024-12-05 12:56:36.135325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.135330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.479 [2024-12-05 12:56:36.135336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:25:36.479 [2024-12-05 12:56:36.135342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.146405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.146438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.479 [2024-12-05 12:56:36.146450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.022 ms 00:25:36.479 [2024-12-05 12:56:36.146456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.146575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.146583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.479 [2024-12-05 12:56:36.146590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:36.479 [2024-12-05 12:56:36.146596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.169697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.169796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.479 [2024-12-05 12:56:36.169880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.077 ms 00:25:36.479 [2024-12-05 12:56:36.169921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.170128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.170176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.479 [2024-12-05 12:56:36.170206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:36.479 [2024-12-05 12:56:36.170233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.170876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.170936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.479 [2024-12-05 12:56:36.170967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:25:36.479 [2024-12-05 12:56:36.170993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.171277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.171319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.479 [2024-12-05 12:56:36.171337] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:25:36.479 [2024-12-05 12:56:36.171352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.178802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.178849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.479 [2024-12-05 12:56:36.178858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.409 ms 00:25:36.479 [2024-12-05 12:56:36.178872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.181250] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:36.479 [2024-12-05 12:56:36.181292] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.479 [2024-12-05 12:56:36.181303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.181311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.479 [2024-12-05 12:56:36.181320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:25:36.479 [2024-12-05 12:56:36.181326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.192743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.192817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.479 [2024-12-05 12:56:36.192829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.367 ms 00:25:36.479 [2024-12-05 12:56:36.192837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.195136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.195176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.479 [2024-12-05 12:56:36.195185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:25:36.479 [2024-12-05 12:56:36.195191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.196472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.196510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.479 [2024-12-05 12:56:36.196518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:25:36.479 [2024-12-05 12:56:36.196525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.196802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.196821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.479 [2024-12-05 12:56:36.196833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:25:36.479 [2024-12-05 12:56:36.196843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.479 [2024-12-05 12:56:36.213496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.479 [2024-12-05 12:56:36.213566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.479 [2024-12-05 12:56:36.213579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.627 ms 00:25:36.480 [2024-12-05 12:56:36.213587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 [2024-12-05 12:56:36.220018] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:36.480 [2024-12-05 12:56:36.235992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.480 [2024-12-05 12:56:36.236048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.480 [2024-12-05 12:56:36.236070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.286 ms 00:25:36.480 [2024-12-05 12:56:36.236078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 [2024-12-05 12:56:36.236191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.480 [2024-12-05 12:56:36.236201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.480 [2024-12-05 12:56:36.236213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:36.480 [2024-12-05 12:56:36.236219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 [2024-12-05 12:56:36.236269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.480 [2024-12-05 12:56:36.236276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.480 [2024-12-05 12:56:36.236283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:36.480 [2024-12-05 12:56:36.236289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 [2024-12-05 12:56:36.236314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.480 [2024-12-05 12:56:36.236321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.480 [2024-12-05 12:56:36.236328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:36.480 [2024-12-05 12:56:36.236336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 [2024-12-05 12:56:36.236368] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.480 [2024-12-05 12:56:36.236377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.480 [2024-12-05 12:56:36.236384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.480 [2024-12-05 12:56:36.236390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:36.480 [2024-12-05 12:56:36.236397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 [2024-12-05 12:56:36.239906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.480 [2024-12-05 12:56:36.240055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.480 [2024-12-05 12:56:36.240070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.492 ms 00:25:36.480 [2024-12-05 12:56:36.240083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 [2024-12-05 12:56:36.240175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.480 [2024-12-05 12:56:36.240185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.480 [2024-12-05 12:56:36.240193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:36.480 [2024-12-05 12:56:36.240200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.480 
[2024-12-05 12:56:36.241019] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.480 [2024-12-05 12:56:36.241949] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.870 ms, result 0 00:25:36.480 [2024-12-05 12:56:36.242422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:36.480 [2024-12-05 12:56:36.252494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.848  [2024-12-05T12:56:38.633Z] Copying: 45/256 [MB] (45 MBps) [2024-12-05T12:56:39.566Z] Copying: 87/256 [MB] (42 MBps) [2024-12-05T12:56:40.524Z] Copying: 131/256 [MB] (43 MBps) [2024-12-05T12:56:41.458Z] Copying: 174/256 [MB] (43 MBps) [2024-12-05T12:56:42.391Z] Copying: 217/256 [MB] (42 MBps) [2024-12-05T12:56:42.651Z] Copying: 256/256 [MB] (average 42 MBps)[2024-12-05 12:56:42.572642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.799 [2024-12-05 12:56:42.575185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.575329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.799 [2024-12-05 12:56:42.575357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:42.799 [2024-12-05 12:56:42.575376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.575426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:42.799 [2024-12-05 12:56:42.576217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.576482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.799 [2024-12-05 12:56:42.576520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:25:42.799 [2024-12-05 12:56:42.576548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.577201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.577237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.799 [2024-12-05 12:56:42.577263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:25:42.799 [2024-12-05 12:56:42.577279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.585874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.585920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.799 [2024-12-05 12:56:42.585934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.533 ms 00:25:42.799 [2024-12-05 12:56:42.585943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.593007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.593074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:42.799 [2024-12-05 12:56:42.593086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.001 ms 00:25:42.799 [2024-12-05 12:56:42.593103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 
12:56:42.594925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.595081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:42.799 [2024-12-05 12:56:42.595099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:25:42.799 [2024-12-05 12:56:42.595108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.598385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.598527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:42.799 [2024-12-05 12:56:42.598547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:25:42.799 [2024-12-05 12:56:42.598555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.598695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.598707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:42.799 [2024-12-05 12:56:42.598726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:42.799 [2024-12-05 12:56:42.598735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.600270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.600301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:42.799 [2024-12-05 12:56:42.600310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:25:42.799 [2024-12-05 12:56:42.600318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.601429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.601464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:42.799 [2024-12-05 12:56:42.601473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:25:42.799 [2024-12-05 12:56:42.601480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.602419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.602454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:42.799 [2024-12-05 12:56:42.602463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:25:42.799 [2024-12-05 12:56:42.602470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.603394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.799 [2024-12-05 12:56:42.603429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:42.799 [2024-12-05 12:56:42.603438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:25:42.799 [2024-12-05 12:56:42.603446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.799 [2024-12-05 12:56:42.603465] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:42.799 [2024-12-05 12:56:42.603480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 
00:25:42.799 [2024-12-05 12:56:42.603499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 
0 state: free 00:25:42.799 [2024-12-05 12:56:42.603690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:42.799 [2024-12-05 12:56:42.603854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
52: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.603993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604092] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:42.800 [2024-12-05 12:56:42.604328] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:42.800 [2024-12-05 12:56:42.604338] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] device UUID: 0bd99ffa-8eea-4244-a9f2-db54184048fa 00:25:42.800 [2024-12-05 12:56:42.604347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:42.800 [2024-12-05 12:56:42.604355] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:42.800 [2024-12-05 12:56:42.604367] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:42.800 [2024-12-05 12:56:42.604380] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:42.800 [2024-12-05 12:56:42.604398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:42.800 [2024-12-05 12:56:42.604411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:42.800 [2024-12-05 12:56:42.604419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:42.800 [2024-12-05 12:56:42.604426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:42.800 [2024-12-05 12:56:42.604433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:42.800 [2024-12-05 12:56:42.604440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.800 [2024-12-05 12:56:42.604448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:42.800 [2024-12-05 12:56:42.604456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:25:42.800 [2024-12-05 12:56:42.604464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.606355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.800 [2024-12-05 12:56:42.606385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:42.800 [2024-12-05 12:56:42.606395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:25:42.800 [2024-12-05 12:56:42.606407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.606531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.800 [2024-12-05 12:56:42.606541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:42.800 [2024-12-05 12:56:42.606550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:42.800 [2024-12-05 12:56:42.606558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.612851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.800 [2024-12-05 12:56:42.612899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.800 [2024-12-05 12:56:42.612918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.800 [2024-12-05 12:56:42.612926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.612998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.800 [2024-12-05 12:56:42.613012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.800 [2024-12-05 12:56:42.613021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.800 [2024-12-05 12:56:42.613028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.613074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.800 [2024-12-05 12:56:42.613084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:25:42.800 [2024-12-05 12:56:42.613091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.800 [2024-12-05 12:56:42.613099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.613119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.800 [2024-12-05 12:56:42.613128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.800 [2024-12-05 12:56:42.613135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.800 [2024-12-05 12:56:42.613143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.625165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.800 [2024-12-05 12:56:42.625227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.800 [2024-12-05 12:56:42.625239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.800 [2024-12-05 12:56:42.625255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.800 [2024-12-05 12:56:42.634346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.800 [2024-12-05 12:56:42.634411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.800 [2024-12-05 12:56:42.634423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.801 [2024-12-05 12:56:42.634431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.801 [2024-12-05 12:56:42.634473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.801 [2024-12-05 12:56:42.634483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.801 [2024-12-05 12:56:42.634491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.801 [2024-12-05 12:56:42.634500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.801 [2024-12-05 12:56:42.634538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.801 [2024-12-05 12:56:42.634547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.801 [2024-12-05 12:56:42.634556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.801 [2024-12-05 12:56:42.634564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.801 [2024-12-05 12:56:42.634638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.801 [2024-12-05 12:56:42.634654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.801 [2024-12-05 12:56:42.634662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.801 [2024-12-05 12:56:42.634670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.801 [2024-12-05 12:56:42.634704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.801 [2024-12-05 12:56:42.634716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:42.801 [2024-12-05 12:56:42.634727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.801 [2024-12-05 12:56:42.634735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.801 [2024-12-05 12:56:42.634780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.801 [2024-12-05 12:56:42.634789] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.801 [2024-12-05 12:56:42.634797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.801 [2024-12-05 12:56:42.634826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.801 [2024-12-05 12:56:42.634879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.801 [2024-12-05 12:56:42.634890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.801 [2024-12-05 12:56:42.634902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.801 [2024-12-05 12:56:42.634913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.801 [2024-12-05 12:56:42.635058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.876 ms, result 0 00:25:43.058 00:25:43.058 00:25:43.058 12:56:42 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.625 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:43.625 12:56:43 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:43.625 12:56:43 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:43.625 12:56:43 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:43.625 12:56:43 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:43.625 12:56:43 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:43.625 12:56:43 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:43.625 12:56:43 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 88114 00:25:43.625 12:56:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88114 ']' 00:25:43.625 12:56:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88114 00:25:43.625 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88114) - No such process 00:25:43.625 Process with pid 88114 is not found 00:25:43.625 12:56:43 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 88114 is not found' 00:25:43.625 00:25:43.625 real 0m46.462s 00:25:43.625 user 1m14.144s 00:25:43.625 sys 0m5.398s 00:25:43.625 12:56:43 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.625 12:56:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:43.625 ************************************ 00:25:43.625 END TEST ftl_trim 00:25:43.625 ************************************ 00:25:43.883 12:56:43 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:43.883 12:56:43 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:43.883 12:56:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.883 12:56:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:43.883 ************************************ 00:25:43.883 START TEST ftl_restore 00:25:43.883 ************************************ 00:25:43.883 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:43.883 * Looking for test storage... 
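Aside: the "OK" from md5sum -c above is the trim test's end-to-end integrity check: a known pattern was written through the FTL bdev before shutdown, its digest recorded, and after the shutdown/restore cycle the data was read back and verified. A hedged sketch of that pattern follows; the real test drives the I/O with spdk_dd against the FTL bdev, so plain dd and these paths are only stand-ins:

    # Record a digest before shutdown, verify after restart.
    # FILE and MD5 are illustrative names, not the script's variables.
    FILE=/tmp/data; MD5=/tmp/testfile.md5
    dd if=/dev/urandom of="$FILE" bs=1M count=256   # write a known 256 MB pattern
    md5sum "$FILE" > "$MD5"                         # remember its digest
    # ... shut the FTL device down, bring it back up, read the data back ...
    md5sum -c "$MD5"                                # prints "<file>: OK" on a match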
00:25:43.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.884 12:56:43 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.884 --rc genhtml_branch_coverage=1 00:25:43.884 --rc genhtml_function_coverage=1 00:25:43.884 --rc genhtml_legend=1 00:25:43.884 --rc geninfo_all_blocks=1 00:25:43.884 --rc geninfo_unexecuted_blocks=1 00:25:43.884 00:25:43.884 ' 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.884 --rc genhtml_branch_coverage=1 00:25:43.884 --rc genhtml_function_coverage=1 
00:25:43.884 --rc genhtml_legend=1 00:25:43.884 --rc geninfo_all_blocks=1 00:25:43.884 --rc geninfo_unexecuted_blocks=1 00:25:43.884 00:25:43.884 ' 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.884 --rc genhtml_branch_coverage=1 00:25:43.884 --rc genhtml_function_coverage=1 00:25:43.884 --rc genhtml_legend=1 00:25:43.884 --rc geninfo_all_blocks=1 00:25:43.884 --rc geninfo_unexecuted_blocks=1 00:25:43.884 00:25:43.884 ' 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.884 --rc genhtml_branch_coverage=1 00:25:43.884 --rc genhtml_function_coverage=1 00:25:43.884 --rc genhtml_legend=1 00:25:43.884 --rc geninfo_all_blocks=1 00:25:43.884 --rc geninfo_unexecuted_blocks=1 00:25:43.884 00:25:43.884 ' 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:43.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.GVOgzow8rm 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88305 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88305 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88305 ']' 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.884 12:56:43 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.884 12:56:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:44.141 [2024-12-05 12:56:43.743424] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
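Aside: restore.sh was invoked as 'restore.sh -c 0000:00:10.0 0000:00:11.0', and the xtrace above shows it walking the option string ':u:c:f': -c sets the NV-cache PCI address, the loop ends, and the remaining positional argument becomes the base device. A reconstruction of that parsing, hedged; the meanings of -u and -f are assumptions, not taken from the script itself:

    # Reconstructed option handling (the string ':u:c:f' matches the xtrace).
    while getopts ':u:c:f' opt; do
      case $opt in
        u) uuid=$OPTARG ;;      # -u: restore an existing device by UUID (assumed)
        c) nv_cache=$OPTARG ;;  # -c: PCI address of the NV-cache bdev
        f) fast=1 ;;            # -f: boolean flag (meaning assumed)
      esac
    done
    shift $((OPTIND - 1))       # equivalent to the "shift 2" seen after "-c <addr>"
    device=$1                   # positional argument: base bdev PCI address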
00:25:44.141 [2024-12-05 12:56:43.743573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88305 ] 00:25:44.141 [2024-12-05 12:56:43.904620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.141 [2024-12-05 12:56:43.929088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.073 12:56:44 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.073 12:56:44 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:25:45.073 12:56:44 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:45.073 12:56:44 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:45.073 12:56:44 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:45.073 12:56:44 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:45.073 12:56:44 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:45.073 12:56:44 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:45.330 12:56:44 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:45.330 12:56:44 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:45.330 12:56:44 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:45.330 12:56:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:45.330 12:56:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:45.330 12:56:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:45.330 12:56:44 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:45.330 12:56:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:45.330 12:56:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:45.330 { 00:25:45.330 "name": "nvme0n1", 00:25:45.330 "aliases": [ 00:25:45.330 "6cdad130-e993-4608-beb6-3d0c1755ee65" 00:25:45.330 ], 00:25:45.330 "product_name": "NVMe disk", 00:25:45.330 "block_size": 4096, 00:25:45.330 "num_blocks": 1310720, 00:25:45.330 "uuid": "6cdad130-e993-4608-beb6-3d0c1755ee65", 00:25:45.330 "numa_id": -1, 00:25:45.330 "assigned_rate_limits": { 00:25:45.330 "rw_ios_per_sec": 0, 00:25:45.330 "rw_mbytes_per_sec": 0, 00:25:45.330 "r_mbytes_per_sec": 0, 00:25:45.330 "w_mbytes_per_sec": 0 00:25:45.330 }, 00:25:45.330 "claimed": true, 00:25:45.330 "claim_type": "read_many_write_one", 00:25:45.330 "zoned": false, 00:25:45.330 "supported_io_types": { 00:25:45.330 "read": true, 00:25:45.330 "write": true, 00:25:45.330 "unmap": true, 00:25:45.330 "flush": true, 00:25:45.330 "reset": true, 00:25:45.330 "nvme_admin": true, 00:25:45.330 "nvme_io": true, 00:25:45.330 "nvme_io_md": false, 00:25:45.330 "write_zeroes": true, 00:25:45.330 "zcopy": false, 00:25:45.330 "get_zone_info": false, 00:25:45.330 "zone_management": false, 00:25:45.330 "zone_append": false, 00:25:45.330 "compare": true, 00:25:45.330 "compare_and_write": false, 00:25:45.330 "abort": true, 00:25:45.330 "seek_hole": false, 00:25:45.330 "seek_data": false, 00:25:45.330 "copy": true, 00:25:45.330 "nvme_iov_md": false 00:25:45.330 }, 00:25:45.330 "driver_specific": { 00:25:45.330 "nvme": [ 
00:25:45.330 { 00:25:45.330 "pci_address": "0000:00:11.0", 00:25:45.330 "trid": { 00:25:45.330 "trtype": "PCIe", 00:25:45.330 "traddr": "0000:00:11.0" 00:25:45.330 }, 00:25:45.330 "ctrlr_data": { 00:25:45.330 "cntlid": 0, 00:25:45.330 "vendor_id": "0x1b36", 00:25:45.330 "model_number": "QEMU NVMe Ctrl", 00:25:45.331 "serial_number": "12341", 00:25:45.331 "firmware_revision": "8.0.0", 00:25:45.331 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:45.331 "oacs": { 00:25:45.331 "security": 0, 00:25:45.331 "format": 1, 00:25:45.331 "firmware": 0, 00:25:45.331 "ns_manage": 1 00:25:45.331 }, 00:25:45.331 "multi_ctrlr": false, 00:25:45.331 "ana_reporting": false 00:25:45.331 }, 00:25:45.331 "vs": { 00:25:45.331 "nvme_version": "1.4" 00:25:45.331 }, 00:25:45.331 "ns_data": { 00:25:45.331 "id": 1, 00:25:45.331 "can_share": false 00:25:45.331 } 00:25:45.331 } 00:25:45.331 ], 00:25:45.331 "mp_policy": "active_passive" 00:25:45.331 } 00:25:45.331 } 00:25:45.331 ]' 00:25:45.331 12:56:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:45.587 12:56:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:45.587 12:56:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:45.587 12:56:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:45.587 12:56:45 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:45.587 12:56:45 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=adae6a3d-7946-4efe-b75e-4c8178582b7c 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:45.587 12:56:45 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u adae6a3d-7946-4efe-b75e-4c8178582b7c 00:25:45.844 12:56:45 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:46.101 12:56:45 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0bbf29d8-faad-430a-b7ff-ec3f3421fdd8 00:25:46.101 12:56:45 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0bbf29d8-faad-430a-b7ff-ec3f3421fdd8 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:46.357 12:56:46 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.357 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.357 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:46.357 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:46.357 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:46.357 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.642 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:46.642 { 00:25:46.642 "name": "2e801740-dcd1-40fe-9569-2a5024ae56ef", 00:25:46.642 "aliases": [ 00:25:46.642 "lvs/nvme0n1p0" 00:25:46.642 ], 00:25:46.642 "product_name": "Logical Volume", 00:25:46.642 "block_size": 4096, 00:25:46.642 "num_blocks": 26476544, 00:25:46.642 "uuid": "2e801740-dcd1-40fe-9569-2a5024ae56ef", 00:25:46.642 "assigned_rate_limits": { 00:25:46.642 "rw_ios_per_sec": 0, 00:25:46.642 "rw_mbytes_per_sec": 0, 00:25:46.642 "r_mbytes_per_sec": 0, 00:25:46.642 "w_mbytes_per_sec": 0 00:25:46.642 }, 00:25:46.642 "claimed": false, 00:25:46.642 "zoned": false, 00:25:46.643 "supported_io_types": { 00:25:46.643 "read": true, 00:25:46.643 "write": true, 00:25:46.643 "unmap": true, 00:25:46.643 "flush": false, 00:25:46.643 "reset": true, 00:25:46.643 "nvme_admin": false, 00:25:46.643 "nvme_io": false, 00:25:46.643 "nvme_io_md": false, 00:25:46.643 "write_zeroes": true, 00:25:46.643 "zcopy": false, 00:25:46.643 "get_zone_info": false, 00:25:46.643 "zone_management": false, 00:25:46.643 "zone_append": false, 00:25:46.643 "compare": false, 00:25:46.643 "compare_and_write": false, 00:25:46.643 "abort": false, 00:25:46.643 "seek_hole": true, 00:25:46.643 "seek_data": true, 00:25:46.643 "copy": false, 00:25:46.643 "nvme_iov_md": false 00:25:46.643 }, 00:25:46.643 "driver_specific": { 00:25:46.643 "lvol": { 00:25:46.643 "lvol_store_uuid": "0bbf29d8-faad-430a-b7ff-ec3f3421fdd8", 00:25:46.643 "base_bdev": "nvme0n1", 00:25:46.643 "thin_provision": true, 00:25:46.643 "num_allocated_clusters": 0, 00:25:46.643 "snapshot": false, 00:25:46.643 "clone": false, 00:25:46.643 "esnap_clone": false 00:25:46.643 } 00:25:46.643 } 00:25:46.643 } 00:25:46.643 ]' 00:25:46.643 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:46.643 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:46.643 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:46.643 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:46.643 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:46.643 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:46.643 12:56:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:46.643 12:56:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:46.643 12:56:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:46.900 12:56:46 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:46.900 12:56:46 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:46.900 12:56:46 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.900 12:56:46 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:46.900 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:46.900 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:46.900 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:46.900 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:47.158 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.158 { 00:25:47.158 "name": "2e801740-dcd1-40fe-9569-2a5024ae56ef", 00:25:47.158 "aliases": [ 00:25:47.158 "lvs/nvme0n1p0" 00:25:47.158 ], 00:25:47.158 "product_name": "Logical Volume", 00:25:47.158 "block_size": 4096, 00:25:47.158 "num_blocks": 26476544, 00:25:47.158 "uuid": "2e801740-dcd1-40fe-9569-2a5024ae56ef", 00:25:47.158 "assigned_rate_limits": { 00:25:47.158 "rw_ios_per_sec": 0, 00:25:47.158 "rw_mbytes_per_sec": 0, 00:25:47.158 "r_mbytes_per_sec": 0, 00:25:47.158 "w_mbytes_per_sec": 0 00:25:47.158 }, 00:25:47.158 "claimed": false, 00:25:47.158 "zoned": false, 00:25:47.158 "supported_io_types": { 00:25:47.158 "read": true, 00:25:47.158 "write": true, 00:25:47.158 "unmap": true, 00:25:47.158 "flush": false, 00:25:47.158 "reset": true, 00:25:47.158 "nvme_admin": false, 00:25:47.158 "nvme_io": false, 00:25:47.158 "nvme_io_md": false, 00:25:47.158 "write_zeroes": true, 00:25:47.158 "zcopy": false, 00:25:47.158 "get_zone_info": false, 00:25:47.158 "zone_management": false, 00:25:47.158 "zone_append": false, 00:25:47.158 "compare": false, 00:25:47.158 "compare_and_write": false, 00:25:47.158 "abort": false, 00:25:47.158 "seek_hole": true, 00:25:47.158 "seek_data": true, 00:25:47.158 "copy": false, 00:25:47.158 "nvme_iov_md": false 00:25:47.158 }, 00:25:47.158 "driver_specific": { 00:25:47.158 "lvol": { 00:25:47.158 "lvol_store_uuid": "0bbf29d8-faad-430a-b7ff-ec3f3421fdd8", 00:25:47.158 "base_bdev": "nvme0n1", 00:25:47.158 "thin_provision": true, 00:25:47.158 "num_allocated_clusters": 0, 00:25:47.158 "snapshot": false, 00:25:47.158 "clone": false, 00:25:47.158 "esnap_clone": false 00:25:47.158 } 00:25:47.158 } 00:25:47.158 } 00:25:47.158 ]' 00:25:47.158 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:47.158 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:47.158 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:47.158 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:47.158 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:47.158 12:56:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:47.158 12:56:46 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:47.158 12:56:46 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:47.416 12:56:47 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:47.416 12:56:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:47.416 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:47.416 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:47.416 12:56:47 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:25:47.416 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:47.416 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e801740-dcd1-40fe-9569-2a5024ae56ef 00:25:47.674 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:47.674 { 00:25:47.674 "name": "2e801740-dcd1-40fe-9569-2a5024ae56ef", 00:25:47.674 "aliases": [ 00:25:47.674 "lvs/nvme0n1p0" 00:25:47.674 ], 00:25:47.674 "product_name": "Logical Volume", 00:25:47.674 "block_size": 4096, 00:25:47.674 "num_blocks": 26476544, 00:25:47.674 "uuid": "2e801740-dcd1-40fe-9569-2a5024ae56ef", 00:25:47.674 "assigned_rate_limits": { 00:25:47.674 "rw_ios_per_sec": 0, 00:25:47.674 "rw_mbytes_per_sec": 0, 00:25:47.674 "r_mbytes_per_sec": 0, 00:25:47.674 "w_mbytes_per_sec": 0 00:25:47.674 }, 00:25:47.674 "claimed": false, 00:25:47.674 "zoned": false, 00:25:47.674 "supported_io_types": { 00:25:47.674 "read": true, 00:25:47.674 "write": true, 00:25:47.674 "unmap": true, 00:25:47.674 "flush": false, 00:25:47.674 "reset": true, 00:25:47.674 "nvme_admin": false, 00:25:47.674 "nvme_io": false, 00:25:47.674 "nvme_io_md": false, 00:25:47.674 "write_zeroes": true, 00:25:47.674 "zcopy": false, 00:25:47.674 "get_zone_info": false, 00:25:47.674 "zone_management": false, 00:25:47.674 "zone_append": false, 00:25:47.674 "compare": false, 00:25:47.674 "compare_and_write": false, 00:25:47.674 "abort": false, 00:25:47.674 "seek_hole": true, 00:25:47.674 "seek_data": true, 00:25:47.674 "copy": false, 00:25:47.674 "nvme_iov_md": false 00:25:47.674 }, 00:25:47.674 "driver_specific": { 00:25:47.674 "lvol": { 00:25:47.674 "lvol_store_uuid": "0bbf29d8-faad-430a-b7ff-ec3f3421fdd8", 00:25:47.674 "base_bdev": "nvme0n1", 00:25:47.674 "thin_provision": true, 00:25:47.674 "num_allocated_clusters": 0, 00:25:47.674 "snapshot": false, 00:25:47.674 "clone": false, 00:25:47.674 "esnap_clone": false 00:25:47.674 } 00:25:47.674 } 00:25:47.674 } 00:25:47.674 ]' 00:25:47.674 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:47.674 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:47.674 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:47.674 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:47.674 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:47.674 12:56:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:47.674 12:56:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:47.674 12:56:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2e801740-dcd1-40fe-9569-2a5024ae56ef --l2p_dram_limit 10' 00:25:47.674 12:56:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:47.674 12:56:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:47.674 12:56:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:47.674 12:56:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:47.674 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:47.674 12:56:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2e801740-dcd1-40fe-9569-2a5024ae56ef --l2p_dram_limit 10 -c nvc0n1p0 00:25:47.933 
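Aside: the get_bdev_size helper above multiplies the two jq results, block_size 4096 B * num_blocks 26476544 = 103424 MiB (and 4096 B * 1310720 = 5120 MiB for the raw namespace earlier). The complaint from restore.sh line 54 is bash's test builtin rejecting an empty operand: the xtrace shows '[' '' -eq 1 ']', and -eq needs integers on both sides, so the command fails with status 2, the condition counts as false, and the script simply continues on to the bdev_ftl_create call. A hedged sketch of the usual guard; use_uuid is an illustrative name, not the script's variable:

    # '[ "" -eq 1 ]' raises "integer expression expected" in bash.
    # Defaulting the variable to a number avoids the noise.
    use_uuid=${use_uuid:-0}     # "use_uuid" is illustrative, not from restore.sh
    if [ "$use_uuid" -eq 1 ]; then
      echo "restoring an existing FTL instance"
    fi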
[2024-12-05 12:56:47.703412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.703476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:47.933 [2024-12-05 12:56:47.703489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:47.933 [2024-12-05 12:56:47.703497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.933 [2024-12-05 12:56:47.703557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.703572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.933 [2024-12-05 12:56:47.703580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:47.933 [2024-12-05 12:56:47.703593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.933 [2024-12-05 12:56:47.703617] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:47.933 [2024-12-05 12:56:47.703904] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:47.933 [2024-12-05 12:56:47.703916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.703924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.933 [2024-12-05 12:56:47.703931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:25:47.933 [2024-12-05 12:56:47.703939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.933 [2024-12-05 12:56:47.703996] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 10d2bf2b-14bb-46ff-9a7a-1879f8c5a96d 00:25:47.933 [2024-12-05 12:56:47.705257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.705449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:47.933 [2024-12-05 12:56:47.705468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:47.933 [2024-12-05 12:56:47.705475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.933 [2024-12-05 12:56:47.712201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.712331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.933 [2024-12-05 12:56:47.712347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:25:47.933 [2024-12-05 12:56:47.712354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.933 [2024-12-05 12:56:47.712436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.712444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.933 [2024-12-05 12:56:47.712452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:47.933 [2024-12-05 12:56:47.712459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.933 [2024-12-05 12:56:47.712510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.712517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:47.933 [2024-12-05 12:56:47.712526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:47.933 [2024-12-05 12:56:47.712532] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.933 [2024-12-05 12:56:47.712563] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.933 [2024-12-05 12:56:47.714284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.933 [2024-12-05 12:56:47.714313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.933 [2024-12-05 12:56:47.714321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:25:47.934 [2024-12-05 12:56:47.714329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.934 [2024-12-05 12:56:47.714363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.934 [2024-12-05 12:56:47.714372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:47.934 [2024-12-05 12:56:47.714379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:47.934 [2024-12-05 12:56:47.714389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.934 [2024-12-05 12:56:47.714404] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:47.934 [2024-12-05 12:56:47.714537] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:47.934 [2024-12-05 12:56:47.714547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:47.934 [2024-12-05 12:56:47.714558] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:47.934 [2024-12-05 12:56:47.714566] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:47.934 [2024-12-05 12:56:47.714579] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:47.934 [2024-12-05 12:56:47.714586] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:47.934 [2024-12-05 12:56:47.714596] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:47.934 [2024-12-05 12:56:47.714602] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:47.934 [2024-12-05 12:56:47.714609] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:47.934 [2024-12-05 12:56:47.714616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.934 [2024-12-05 12:56:47.714624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:47.934 [2024-12-05 12:56:47.714630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:25:47.934 [2024-12-05 12:56:47.714638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.934 [2024-12-05 12:56:47.714708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.934 [2024-12-05 12:56:47.714719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:47.934 [2024-12-05 12:56:47.714725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:47.934 [2024-12-05 12:56:47.714738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.934 [2024-12-05 12:56:47.714832] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:47.934 [2024-12-05 12:56:47.714842] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:25:47.934 [2024-12-05 12:56:47.714849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.934 [2024-12-05 12:56:47.714858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.934 [2024-12-05 12:56:47.714865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:47.934 [2024-12-05 12:56:47.714872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:47.934 [2024-12-05 12:56:47.714878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:47.934 [2024-12-05 12:56:47.714886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:47.934 [2024-12-05 12:56:47.714892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:47.934 [2024-12-05 12:56:47.714898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.934 [2024-12-05 12:56:47.714904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:47.934 [2024-12-05 12:56:47.714910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:47.934 [2024-12-05 12:56:47.714916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.934 [2024-12-05 12:56:47.714925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:47.934 [2024-12-05 12:56:47.714932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:47.934 [2024-12-05 12:56:47.714940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.934 [2024-12-05 12:56:47.714949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:47.934 [2024-12-05 12:56:47.714957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:47.934 [2024-12-05 12:56:47.714963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.934 [2024-12-05 12:56:47.714972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:47.934 [2024-12-05 12:56:47.714978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:47.934 [2024-12-05 12:56:47.714986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.934 [2024-12-05 12:56:47.714992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:47.934 [2024-12-05 12:56:47.715000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:47.934 [2024-12-05 12:56:47.715006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.934 [2024-12-05 12:56:47.715014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:47.934 [2024-12-05 12:56:47.715020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:47.934 [2024-12-05 12:56:47.715028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.934 [2024-12-05 12:56:47.715034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:47.934 [2024-12-05 12:56:47.715044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:47.934 [2024-12-05 12:56:47.715050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.934 [2024-12-05 12:56:47.715058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:47.934 [2024-12-05 12:56:47.715064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:47.934 [2024-12-05 12:56:47.715072] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.934 [2024-12-05 12:56:47.715078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:47.934 [2024-12-05 12:56:47.715085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:47.934 [2024-12-05 12:56:47.715091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.934 [2024-12-05 12:56:47.715099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:47.934 [2024-12-05 12:56:47.715106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:47.934 [2024-12-05 12:56:47.715113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.934 [2024-12-05 12:56:47.715119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:47.934 [2024-12-05 12:56:47.715127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:47.934 [2024-12-05 12:56:47.715133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.934 [2024-12-05 12:56:47.715140] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:47.934 [2024-12-05 12:56:47.715147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:47.934 [2024-12-05 12:56:47.715157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.934 [2024-12-05 12:56:47.715163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.934 [2024-12-05 12:56:47.715171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:47.934 [2024-12-05 12:56:47.715179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:47.934 [2024-12-05 12:56:47.715187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:47.934 [2024-12-05 12:56:47.715194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:47.934 [2024-12-05 12:56:47.715201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:47.934 [2024-12-05 12:56:47.715209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:47.934 [2024-12-05 12:56:47.715218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:47.934 [2024-12-05 12:56:47.715229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.934 [2024-12-05 12:56:47.715242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:47.934 [2024-12-05 12:56:47.715248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:47.934 [2024-12-05 12:56:47.715257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:47.934 [2024-12-05 12:56:47.715264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:47.934 [2024-12-05 12:56:47.715272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:47.934 [2024-12-05 12:56:47.715279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:25:47.934 [2024-12-05 12:56:47.715288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:47.934 [2024-12-05 12:56:47.715295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:47.934 [2024-12-05 12:56:47.715303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:47.934 [2024-12-05 12:56:47.715310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:47.934 [2024-12-05 12:56:47.715318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:47.934 [2024-12-05 12:56:47.715324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:47.934 [2024-12-05 12:56:47.715332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:47.934 [2024-12-05 12:56:47.715338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:47.934 [2024-12-05 12:56:47.715346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:47.934 [2024-12-05 12:56:47.715354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.934 [2024-12-05 12:56:47.715362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.934 [2024-12-05 12:56:47.715369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:47.934 [2024-12-05 12:56:47.715376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:47.934 [2024-12-05 12:56:47.715381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:47.935 [2024-12-05 12:56:47.715389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.935 [2024-12-05 12:56:47.715397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:47.935 [2024-12-05 12:56:47.715407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:25:47.935 [2024-12-05 12:56:47.715412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.935 [2024-12-05 12:56:47.715455] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
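
The superblock metadata dump above expresses the same layout as the earlier MiB-based region dump, just in 4096-byte blocks with hex offsets and sizes. For instance, the entry `type:0x2 ... blk_offs:0x20 blk_sz:0x5000` lines up exactly with `Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB`, and the trailing `type:0xfffffffe` entry accounts for the remainder of the cache device. A quick shell cross-check, using nothing beyond the dump and the 4096 B block size reported for the bdev earlier in the log:

```bash
# Convert the hex superblock entries into the units of the region dump.
printf 'l2p: offset %d KiB, size %d MiB\n' \
	$((0x20 * 4096 / 1024)) \
	$((0x5000 * 4096 / 1024 / 1024))
# -> l2p: offset 128 KiB (0.12 MiB), size 80 MiB, matching "Region l2p"
printf 'nvc regions end at %d MiB\n' \
	$(((0x7220 + 0x13c0e0) * 4096 / 1024 / 1024))
# -> 5171 MiB, i.e. the reported "NV cache device capacity: 5171.00 MiB"
```
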
00:25:47.935 [2024-12-05 12:56:47.715463] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:50.462 [2024-12-05 12:56:49.975606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:49.975887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:50.462 [2024-12-05 12:56:49.975917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2260.127 ms 00:25:50.462 [2024-12-05 12:56:49.975927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:49.987403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:49.987629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.462 [2024-12-05 12:56:49.987653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.376 ms 00:25:50.462 [2024-12-05 12:56:49.987663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:49.987831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:49.987842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:50.462 [2024-12-05 12:56:49.987853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:50.462 [2024-12-05 12:56:49.987860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:49.999148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:49.999203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.462 [2024-12-05 12:56:49.999220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.235 ms 00:25:50.462 [2024-12-05 12:56:49.999232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:49.999283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:49.999292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.462 [2024-12-05 12:56:49.999303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:50.462 [2024-12-05 12:56:49.999310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:49.999778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:49.999826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.462 [2024-12-05 12:56:49.999841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:25:50.462 [2024-12-05 12:56:49.999850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:50.000001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:50.000017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.462 [2024-12-05 12:56:50.000029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:25:50.462 [2024-12-05 12:56:50.000038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:50.007171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:50.007214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.462 [2024-12-05 
12:56:50.007228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.108 ms 00:25:50.462 [2024-12-05 12:56:50.007236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.462 [2024-12-05 12:56:50.026922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:50.462 [2024-12-05 12:56:50.030680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.462 [2024-12-05 12:56:50.030926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:50.462 [2024-12-05 12:56:50.030957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.325 ms 00:25:50.462 [2024-12-05 12:56:50.030971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.078607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.078707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:50.463 [2024-12-05 12:56:50.078727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.557 ms 00:25:50.463 [2024-12-05 12:56:50.078743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.078975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.078991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:50.463 [2024-12-05 12:56:50.079000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:25:50.463 [2024-12-05 12:56:50.079010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.082359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.082414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:50.463 [2024-12-05 12:56:50.082429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.318 ms 00:25:50.463 [2024-12-05 12:56:50.082440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.085064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.085238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:50.463 [2024-12-05 12:56:50.085256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:25:50.463 [2024-12-05 12:56:50.085266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.085605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.085625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:50.463 [2024-12-05 12:56:50.085634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:25:50.463 [2024-12-05 12:56:50.085647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.110692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.110752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:50.463 [2024-12-05 12:56:50.110769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.023 ms 00:25:50.463 [2024-12-05 12:56:50.110780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.115052] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.115101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:50.463 [2024-12-05 12:56:50.115115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:25:50.463 [2024-12-05 12:56:50.115127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.118343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.118390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:50.463 [2024-12-05 12:56:50.118401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:25:50.463 [2024-12-05 12:56:50.118412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.121466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.121509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:50.463 [2024-12-05 12:56:50.121521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:25:50.463 [2024-12-05 12:56:50.121536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.121582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.121601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:50.463 [2024-12-05 12:56:50.121612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:50.463 [2024-12-05 12:56:50.121623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.121694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.463 [2024-12-05 12:56:50.121706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:50.463 [2024-12-05 12:56:50.121716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:50.463 [2024-12-05 12:56:50.121729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.463 [2024-12-05 12:56:50.122774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2418.935 ms, result 0 00:25:50.463 { 00:25:50.463 "name": "ftl0", 00:25:50.463 "uuid": "10d2bf2b-14bb-46ff-9a7a-1879f8c5a96d" 00:25:50.463 } 00:25:50.463 12:56:50 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:50.463 12:56:50 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:50.722 12:56:50 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:50.722 12:56:50 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:50.722 [2024-12-05 12:56:50.543587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.543657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:50.722 [2024-12-05 12:56:50.543678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:50.722 [2024-12-05 12:56:50.543686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.543714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:50.722 
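
With startup finished (`'FTL startup' ... result 0`) and the new device UUID returned, restore.sh@61-@65 snapshots the bdev subsystem configuration and then unloads ftl0 so the restore path can be exercised against it. The traced commands bracket `save_subsystem_config -n bdev` between literal `{"subsystems": [` / `]}` wrappers; presumably the combined output is redirected into the ftl.json that spdk_dd consumes later in this log. A sketch of that step (the echoed wrappers and the RPC call are verbatim from the trace; the grouping and redirection are inferred, and the target path is taken from the later `spdk_dd --json=...` invocation):

```bash
# How restore.sh@61-@63 appears to assemble the JSON config for spdk_dd.
{
	echo '{"subsystems": ['
	/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
	echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
```
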
[2024-12-05 12:56:50.544329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.544358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:50.722 [2024-12-05 12:56:50.544368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:25:50.722 [2024-12-05 12:56:50.544381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.544640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.544662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:50.722 [2024-12-05 12:56:50.544675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:25:50.722 [2024-12-05 12:56:50.544686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.547937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.547964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:50.722 [2024-12-05 12:56:50.547974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:25:50.722 [2024-12-05 12:56:50.547985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.554249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.554293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:50.722 [2024-12-05 12:56:50.554304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:25:50.722 [2024-12-05 12:56:50.554317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.555869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.555912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:50.722 [2024-12-05 12:56:50.555921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:25:50.722 [2024-12-05 12:56:50.555931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.559783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.559966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:50.722 [2024-12-05 12:56:50.559982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.817 ms 00:25:50.722 [2024-12-05 12:56:50.559992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.560121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.560138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:50.722 [2024-12-05 12:56:50.560149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:50.722 [2024-12-05 12:56:50.560159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.561833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.561869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:50.722 [2024-12-05 12:56:50.561878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:25:50.722 [2024-12-05 12:56:50.561888] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.562952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.563067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:50.722 [2024-12-05 12:56:50.563081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:25:50.722 [2024-12-05 12:56:50.563090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.563975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.564005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:50.722 [2024-12-05 12:56:50.564014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:25:50.722 [2024-12-05 12:56:50.564022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.564903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.722 [2024-12-05 12:56:50.564938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:50.722 [2024-12-05 12:56:50.564946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:25:50.722 [2024-12-05 12:56:50.564956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.722 [2024-12-05 12:56:50.564986] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:50.723 [2024-12-05 12:56:50.565007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565133] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 
[2024-12-05 12:56:50.565366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:25:50.723 [2024-12-05 12:56:50.565590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:50.723 [2024-12-05 12:56:50.565800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:50.724 [2024-12-05 12:56:50.565931] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:50.724 [2024-12-05 12:56:50.565939] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10d2bf2b-14bb-46ff-9a7a-1879f8c5a96d 00:25:50.724 [2024-12-05 12:56:50.565950] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:50.724 [2024-12-05 12:56:50.565958] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:50.724 [2024-12-05 12:56:50.565968] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:50.724 [2024-12-05 12:56:50.565976] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:50.724 [2024-12-05 12:56:50.565989] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:50.724 [2024-12-05 12:56:50.565999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:50.724 [2024-12-05 12:56:50.566009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:50.724 [2024-12-05 12:56:50.566015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:50.724 [2024-12-05 12:56:50.566023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:50.724 [2024-12-05 12:56:50.566030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.724 [2024-12-05 12:56:50.566040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:50.724 [2024-12-05 12:56:50.566048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:25:50.724 [2024-12-05 12:56:50.566057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.724 [2024-12-05 12:56:50.568220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.724 [2024-12-05 12:56:50.568330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:50.724 
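
One reading note on the statistics dump above: `WAF: inf` is expected for a device that was only just created. The write amplification factor reported here is evidently total media writes divided by user writes, and with `total writes: 960` (all startup metadata) against `user writes: 0` the quotient is infinite:

```bash
# WAF as dumped above: total media writes / user writes; 960 / 0 -> inf.
awk 'BEGIN { tw = 960; uw = 0; print (uw ? tw / uw : "inf") }'
```
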
[2024-12-05 12:56:50.568390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:25:50.724 [2024-12-05 12:56:50.568424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.724 [2024-12-05 12:56:50.568573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.724 [2024-12-05 12:56:50.568606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:50.724 [2024-12-05 12:56:50.568713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:50.724 [2024-12-05 12:56:50.568739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.575411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.575580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.982 [2024-12-05 12:56:50.575638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.575695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.575783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.575860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.982 [2024-12-05 12:56:50.575889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.575933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.576107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.576181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.982 [2024-12-05 12:56:50.576240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.576268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.576323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.576383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.982 [2024-12-05 12:56:50.576435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.576461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.589011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.589233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.982 [2024-12-05 12:56:50.589369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.589454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.599327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.599531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.982 [2024-12-05 12:56:50.599588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.599613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.599720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.599750] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.982 [2024-12-05 12:56:50.599802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.599839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.599944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.599974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:50.982 [2024-12-05 12:56:50.599995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.600044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.600170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.600199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:50.982 [2024-12-05 12:56:50.600249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.600262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.982 [2024-12-05 12:56:50.600302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.982 [2024-12-05 12:56:50.600314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:50.982 [2024-12-05 12:56:50.600322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.982 [2024-12-05 12:56:50.600336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.983 [2024-12-05 12:56:50.600377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.983 [2024-12-05 12:56:50.600390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:50.983 [2024-12-05 12:56:50.600398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.983 [2024-12-05 12:56:50.600407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.983 [2024-12-05 12:56:50.600462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.983 [2024-12-05 12:56:50.600473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:50.983 [2024-12-05 12:56:50.600481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.983 [2024-12-05 12:56:50.600490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.983 [2024-12-05 12:56:50.600631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.015 ms, result 0 00:25:50.983 true 00:25:50.983 12:56:50 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88305 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88305 ']' 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88305 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88305 00:25:50.983 killing process with pid 88305 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:50.983 
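
The `killprocess 88305` call above walks through autotest_common.sh's checks before signalling: refuse an empty pid (@954), probe the process with `kill -0` (@958), resolve its command name via `ps` on Linux (@959-@960, giving `reactor_0` here), and compare it against `sudo` (@964); the echo/kill/wait steps (@972-@978) follow just below. A hedged reconstruction of the helper under those observations; the traced commands are real, the branching around them is inferred, and the sudo special case is left out because this run takes the plain path:

```bash
# Reconstruction of killprocess from the traced steps @954-@978.
killprocess() {
	local pid=$1
	[ -z "$pid" ] && return 1                # @954: no pid supplied
	kill -0 "$pid" 2> /dev/null || return 0  # @958: process already gone
	local process_name=$pid
	if [ "$(uname)" = Linux ]; then          # @959
		# @960: feeds the sudo check (@964); reactor_0 in this run
		process_name=$(ps --no-headers -o comm= "$pid")
	fi
	echo "killing process with pid $pid"     # @972
	kill "$pid"                              # @973
	wait "$pid" || true                      # @978: reap; ignore kill's status
}
```
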
12:56:50 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88305' 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88305 00:25:50.983 12:56:50 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88305 00:25:57.620 12:56:56 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:00.966 262144+0 records in 00:26:00.966 262144+0 records out 00:26:00.966 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.32915 s, 248 MB/s 00:26:00.966 12:57:00 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:03.510 12:57:02 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:03.510 [2024-12-05 12:57:02.804520] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:26:03.510 [2024-12-05 12:57:02.804676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88512 ] 00:26:03.510 [2024-12-05 12:57:02.965439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.510 [2024-12-05 12:57:02.991620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.510 [2024-12-05 12:57:03.097844] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.510 [2024-12-05 12:57:03.097942] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:03.510 [2024-12-05 12:57:03.253058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.253142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:03.510 [2024-12-05 12:57:03.253157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:03.510 [2024-12-05 12:57:03.253168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.253231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.253242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.510 [2024-12-05 12:57:03.253257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:03.510 [2024-12-05 12:57:03.253268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.253294] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:03.510 [2024-12-05 12:57:03.253606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:03.510 [2024-12-05 12:57:03.253625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.253636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.510 [2024-12-05 12:57:03.253648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:26:03.510 [2024-12-05 12:57:03.253656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.255104] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:03.510 [2024-12-05 12:57:03.257878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.257917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:03.510 [2024-12-05 12:57:03.257936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:26:03.510 [2024-12-05 12:57:03.257948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.258028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.258039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:03.510 [2024-12-05 12:57:03.258051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:03.510 [2024-12-05 12:57:03.258059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.264730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.264978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.510 [2024-12-05 12:57:03.265005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.626 ms 00:26:03.510 [2024-12-05 12:57:03.265020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.265138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.265149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.510 [2024-12-05 12:57:03.265161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:26:03.510 [2024-12-05 12:57:03.265168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.265244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.265255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:03.510 [2024-12-05 12:57:03.265263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:03.510 [2024-12-05 12:57:03.265273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.265300] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:03.510 [2024-12-05 12:57:03.267036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.267066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.510 [2024-12-05 12:57:03.267076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:26:03.510 [2024-12-05 12:57:03.267084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.267125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.267134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:03.510 [2024-12-05 12:57:03.267143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:03.510 [2024-12-05 12:57:03.267154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.267189] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:03.510 [2024-12-05 12:57:03.267215] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:03.510 [2024-12-05 12:57:03.267257] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:03.510 [2024-12-05 12:57:03.267277] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:03.510 [2024-12-05 12:57:03.267386] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:03.510 [2024-12-05 12:57:03.267401] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:03.510 [2024-12-05 12:57:03.267414] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:03.510 [2024-12-05 12:57:03.267425] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:03.510 [2024-12-05 12:57:03.267434] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:03.510 [2024-12-05 12:57:03.267442] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:03.510 [2024-12-05 12:57:03.267450] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:03.510 [2024-12-05 12:57:03.267458] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:03.510 [2024-12-05 12:57:03.267465] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:03.510 [2024-12-05 12:57:03.267473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.267480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:03.510 [2024-12-05 12:57:03.267489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:26:03.510 [2024-12-05 12:57:03.267496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.267588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.510 [2024-12-05 12:57:03.267599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:03.510 [2024-12-05 12:57:03.267607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:03.510 [2024-12-05 12:57:03.267617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.510 [2024-12-05 12:57:03.267720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:03.510 [2024-12-05 12:57:03.267731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:03.510 [2024-12-05 12:57:03.267746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.510 [2024-12-05 12:57:03.267761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.510 [2024-12-05 12:57:03.267770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:03.510 [2024-12-05 12:57:03.267778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:03.510 [2024-12-05 12:57:03.267786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:03.510 [2024-12-05 12:57:03.267796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:03.510 [2024-12-05 12:57:03.267832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:03.510 [2024-12-05 
12:57:03.267841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.510 [2024-12-05 12:57:03.267849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:03.510 [2024-12-05 12:57:03.267860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:03.510 [2024-12-05 12:57:03.267868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:03.510 [2024-12-05 12:57:03.267876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:03.510 [2024-12-05 12:57:03.267885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:03.510 [2024-12-05 12:57:03.267893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.510 [2024-12-05 12:57:03.267901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:03.510 [2024-12-05 12:57:03.267908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:03.511 [2024-12-05 12:57:03.267915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.511 [2024-12-05 12:57:03.267923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:03.511 [2024-12-05 12:57:03.267931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:03.511 [2024-12-05 12:57:03.267938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.511 [2024-12-05 12:57:03.267946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:03.511 [2024-12-05 12:57:03.267954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:03.511 [2024-12-05 12:57:03.267962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.511 [2024-12-05 12:57:03.267970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:03.511 [2024-12-05 12:57:03.267977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:03.511 [2024-12-05 12:57:03.267990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.511 [2024-12-05 12:57:03.267998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:03.511 [2024-12-05 12:57:03.268006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:03.511 [2024-12-05 12:57:03.268013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:03.511 [2024-12-05 12:57:03.268021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:03.511 [2024-12-05 12:57:03.268028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:03.511 [2024-12-05 12:57:03.268036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.511 [2024-12-05 12:57:03.268043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:03.511 [2024-12-05 12:57:03.268051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:03.511 [2024-12-05 12:57:03.268058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:03.511 [2024-12-05 12:57:03.268066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:03.511 [2024-12-05 12:57:03.268074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:03.511 [2024-12-05 12:57:03.268081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.511 [2024-12-05 12:57:03.268088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:26:03.511 [2024-12-05 12:57:03.268097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:03.511 [2024-12-05 12:57:03.268105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.511 [2024-12-05 12:57:03.268114] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:03.511 [2024-12-05 12:57:03.268125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:03.511 [2024-12-05 12:57:03.268134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:03.511 [2024-12-05 12:57:03.268142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:03.511 [2024-12-05 12:57:03.268151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:03.511 [2024-12-05 12:57:03.268158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:03.511 [2024-12-05 12:57:03.268164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:03.511 [2024-12-05 12:57:03.268171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:03.511 [2024-12-05 12:57:03.268178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:03.511 [2024-12-05 12:57:03.268185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:03.511 [2024-12-05 12:57:03.268193] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:03.511 [2024-12-05 12:57:03.268202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.511 [2024-12-05 12:57:03.268211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:03.511 [2024-12-05 12:57:03.268218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:03.511 [2024-12-05 12:57:03.268225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:03.511 [2024-12-05 12:57:03.268232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:03.511 [2024-12-05 12:57:03.268242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:03.511 [2024-12-05 12:57:03.268250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:03.511 [2024-12-05 12:57:03.268257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:03.511 [2024-12-05 12:57:03.268264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:03.511 [2024-12-05 12:57:03.268270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:03.511 [2024-12-05 12:57:03.268278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:03.511 [2024-12-05 12:57:03.268285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:03.511 [2024-12-05 12:57:03.268292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:03.511 [2024-12-05 12:57:03.268299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:03.511 [2024-12-05 12:57:03.268306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:03.511 [2024-12-05 12:57:03.268313] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:03.511 [2024-12-05 12:57:03.268321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:03.511 [2024-12-05 12:57:03.268329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:03.511 [2024-12-05 12:57:03.268336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:03.511 [2024-12-05 12:57:03.268343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:03.511 [2024-12-05 12:57:03.268350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:03.511 [2024-12-05 12:57:03.268360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.268373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:03.511 [2024-12-05 12:57:03.268381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:26:03.511 [2024-12-05 12:57:03.268390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.279964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.280202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.511 [2024-12-05 12:57:03.280222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.528 ms 00:26:03.511 [2024-12-05 12:57:03.280231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.280336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.280346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:03.511 [2024-12-05 12:57:03.280354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:03.511 [2024-12-05 12:57:03.280368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.298649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.298733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.511 [2024-12-05 12:57:03.298752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.199 ms 00:26:03.511 [2024-12-05 12:57:03.298764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.298885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 
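The dumped geometry is internally consistent, and the key relation is easy to verify by hand: the l2p region must hold one mapping entry per logical block, so with 20971520 L2P entries at the reported 4 bytes apiece it comes to exactly the 80.00 MiB the region dump shows. A plain arithmetic check (not an SPDK tool):

    entries=20971520   # 'L2P entries' from the layout setup above
    addr=4             # 'L2P address size: 4' (bytes per entry)
    echo $(( entries * addr / 1024 / 1024 )) MiB   # -> 80 MiB, matching 'Region l2p ... blocks: 80.00 MiB'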
12:57:03.298901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.511 [2024-12-05 12:57:03.298914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:03.511 [2024-12-05 12:57:03.298924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.299463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.299500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.511 [2024-12-05 12:57:03.299515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:26:03.511 [2024-12-05 12:57:03.299527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.299723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.299743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.511 [2024-12-05 12:57:03.299762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:26:03.511 [2024-12-05 12:57:03.299772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.307186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.307241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.511 [2024-12-05 12:57:03.307256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.386 ms 00:26:03.511 [2024-12-05 12:57:03.307268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.310265] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:03.511 [2024-12-05 12:57:03.310314] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:03.511 [2024-12-05 12:57:03.310328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.310338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:03.511 [2024-12-05 12:57:03.310348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:26:03.511 [2024-12-05 12:57:03.310356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.511 [2024-12-05 12:57:03.325204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.511 [2024-12-05 12:57:03.325298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:03.511 [2024-12-05 12:57:03.325312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.795 ms 00:26:03.511 [2024-12-05 12:57:03.325321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.512 [2024-12-05 12:57:03.328068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.512 [2024-12-05 12:57:03.328244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:03.512 [2024-12-05 12:57:03.328263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:26:03.512 [2024-12-05 12:57:03.328273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.512 [2024-12-05 12:57:03.329785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.512 [2024-12-05 12:57:03.329835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:26:03.512 [2024-12-05 12:57:03.329845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:26:03.512 [2024-12-05 12:57:03.329854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.512 [2024-12-05 12:57:03.330244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.512 [2024-12-05 12:57:03.330269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:03.512 [2024-12-05 12:57:03.330279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:26:03.512 [2024-12-05 12:57:03.330286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.512 [2024-12-05 12:57:03.348194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.512 [2024-12-05 12:57:03.348282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:03.512 [2024-12-05 12:57:03.348296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.888 ms 00:26:03.512 [2024-12-05 12:57:03.348305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.512 [2024-12-05 12:57:03.356540] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:03.769 [2024-12-05 12:57:03.360051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.769 [2024-12-05 12:57:03.360096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:03.769 [2024-12-05 12:57:03.360116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.687 ms 00:26:03.769 [2024-12-05 12:57:03.360125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.769 [2024-12-05 12:57:03.360265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.769 [2024-12-05 12:57:03.360277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:03.769 [2024-12-05 12:57:03.360290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:03.769 [2024-12-05 12:57:03.360298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.769 [2024-12-05 12:57:03.360368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.769 [2024-12-05 12:57:03.360378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:03.769 [2024-12-05 12:57:03.360389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:03.769 [2024-12-05 12:57:03.360397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.769 [2024-12-05 12:57:03.360416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.769 [2024-12-05 12:57:03.360425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:03.769 [2024-12-05 12:57:03.360434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:03.769 [2024-12-05 12:57:03.360441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.769 [2024-12-05 12:57:03.360475] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:03.769 [2024-12-05 12:57:03.360486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.769 [2024-12-05 12:57:03.360498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:03.769 [2024-12-05 12:57:03.360510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.011 ms 00:26:03.769 [2024-12-05 12:57:03.360522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.769 [2024-12-05 12:57:03.364177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.769 [2024-12-05 12:57:03.364218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:03.769 [2024-12-05 12:57:03.364237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:26:03.769 [2024-12-05 12:57:03.364248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.769 [2024-12-05 12:57:03.364320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.769 [2024-12-05 12:57:03.364331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:03.769 [2024-12-05 12:57:03.364340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:03.769 [2024-12-05 12:57:03.364352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.769 [2024-12-05 12:57:03.365424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.918 ms, result 0 00:26:04.763  [2024-12-05T12:57:05.545Z] Copying: 43/1024 [MB] (43 MBps) [2024-12-05T12:57:06.531Z] Copying: 85/1024 [MB] (42 MBps) [2024-12-05T12:57:07.463Z] Copying: 126/1024 [MB] (40 MBps) [2024-12-05T12:57:08.446Z] Copying: 163/1024 [MB] (37 MBps) [2024-12-05T12:57:09.817Z] Copying: 210/1024 [MB] (47 MBps) [2024-12-05T12:57:10.383Z] Copying: 255/1024 [MB] (45 MBps) [2024-12-05T12:57:11.756Z] Copying: 303/1024 [MB] (47 MBps) [2024-12-05T12:57:12.387Z] Copying: 348/1024 [MB] (45 MBps) [2024-12-05T12:57:13.757Z] Copying: 394/1024 [MB] (46 MBps) [2024-12-05T12:57:14.690Z] Copying: 438/1024 [MB] (43 MBps) [2024-12-05T12:57:15.621Z] Copying: 483/1024 [MB] (44 MBps) [2024-12-05T12:57:16.552Z] Copying: 527/1024 [MB] (44 MBps) [2024-12-05T12:57:17.486Z] Copying: 569/1024 [MB] (41 MBps) [2024-12-05T12:57:18.418Z] Copying: 608/1024 [MB] (39 MBps) [2024-12-05T12:57:19.793Z] Copying: 649/1024 [MB] (40 MBps) [2024-12-05T12:57:20.731Z] Copying: 690/1024 [MB] (40 MBps) [2024-12-05T12:57:21.662Z] Copying: 731/1024 [MB] (40 MBps) [2024-12-05T12:57:22.591Z] Copying: 770/1024 [MB] (39 MBps) [2024-12-05T12:57:23.522Z] Copying: 811/1024 [MB] (40 MBps) [2024-12-05T12:57:24.455Z] Copying: 850/1024 [MB] (38 MBps) [2024-12-05T12:57:25.387Z] Copying: 886/1024 [MB] (36 MBps) [2024-12-05T12:57:26.757Z] Copying: 925/1024 [MB] (38 MBps) [2024-12-05T12:57:27.690Z] Copying: 964/1024 [MB] (39 MBps) [2024-12-05T12:57:27.976Z] Copying: 1003/1024 [MB] (38 MBps) [2024-12-05T12:57:27.976Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-12-05 12:57:27.918314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.918375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.124 [2024-12-05 12:57:27.918393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:28.124 [2024-12-05 12:57:27.918415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.918436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.124 [2024-12-05 12:57:27.919022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.919053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
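Once the copy reaches 1024/1024 MB, spdk_dd exits and tears the device down, so the 'FTL shutdown' management trace now underway is expected rather than an error; its steps run the startup actions' rollbacks in reverse order. The reported average is also self-consistent: 1024 MB at 41 MBps is about 25 s, which matches the progress timestamps running from 12:57:04 to 12:57:27. A one-line check of that arithmetic:

    echo "scale=1; 1024 / 41" | bc    # -> 24.9, i.e. ~25 s of copy time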
00:26:28.124 [2024-12-05 12:57:27.919063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:26:28.124 [2024-12-05 12:57:27.919071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.920691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.920852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.124 [2024-12-05 12:57:27.920869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:26:28.124 [2024-12-05 12:57:27.920878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.933923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.933964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.124 [2024-12-05 12:57:27.933975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.021 ms 00:26:28.124 [2024-12-05 12:57:27.933984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.940223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.940386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.124 [2024-12-05 12:57:27.940401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.208 ms 00:26:28.124 [2024-12-05 12:57:27.940410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.941756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.941796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.124 [2024-12-05 12:57:27.941820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:26:28.124 [2024-12-05 12:57:27.941829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.945224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.945257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.124 [2024-12-05 12:57:27.945274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:26:28.124 [2024-12-05 12:57:27.945283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.945417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.945432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:28.124 [2024-12-05 12:57:27.945441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:28.124 [2024-12-05 12:57:27.945461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.946970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.947095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:28.124 [2024-12-05 12:57:27.947110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.490 ms 00:26:28.124 [2024-12-05 12:57:27.947117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.948279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.948313] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:28.124 [2024-12-05 12:57:27.948322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:26:28.124 [2024-12-05 12:57:27.948329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.949187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.949218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:28.124 [2024-12-05 12:57:27.949228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:26:28.124 [2024-12-05 12:57:27.949235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.950157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.124 [2024-12-05 12:57:27.950267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:28.124 [2024-12-05 12:57:27.950280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:26:28.124 [2024-12-05 12:57:27.950288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.124 [2024-12-05 12:57:27.950315] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:28.124 [2024-12-05 12:57:27.950337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:28.124 [2024-12-05 12:57:27.950351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:28.124 [2024-12-05 12:57:27.950359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:28.124 [2024-12-05 12:57:27.950367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 
261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.950992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951050] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 
12:57:27.951230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:28.125 [2024-12-05 12:57:27.951313] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:28.125 [2024-12-05 12:57:27.951321] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10d2bf2b-14bb-46ff-9a7a-1879f8c5a96d 00:26:28.125 [2024-12-05 12:57:27.951330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:28.125 [2024-12-05 12:57:27.951337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:28.125 [2024-12-05 12:57:27.951344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:28.125 [2024-12-05 12:57:27.951352] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:28.125 [2024-12-05 12:57:27.951359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:28.125 [2024-12-05 12:57:27.951367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:28.125 [2024-12-05 12:57:27.951374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:28.125 [2024-12-05 12:57:27.951384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:28.125 [2024-12-05 12:57:27.951407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:28.125 [2024-12-05 12:57:27.951418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.125 [2024-12-05 12:57:27.951432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:28.125 [2024-12-05 12:57:27.951441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:26:28.125 [2024-12-05 12:57:27.951448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.125 [2024-12-05 12:57:27.953271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.125 [2024-12-05 12:57:27.953395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:28.125 [2024-12-05 12:57:27.953410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.806 ms 00:26:28.125 [2024-12-05 12:57:27.953418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:28.125 [2024-12-05 12:57:27.953538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.125 [2024-12-05 12:57:27.953552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:28.125 [2024-12-05 12:57:27.953561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:26:28.125 [2024-12-05 12:57:27.953572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.125 [2024-12-05 12:57:27.959587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.125 [2024-12-05 12:57:27.959625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:28.125 [2024-12-05 12:57:27.959635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.125 [2024-12-05 12:57:27.959649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.125 [2024-12-05 12:57:27.959714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.125 [2024-12-05 12:57:27.959723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:28.125 [2024-12-05 12:57:27.959731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.125 [2024-12-05 12:57:27.959742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.125 [2024-12-05 12:57:27.959782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.125 [2024-12-05 12:57:27.959792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:28.125 [2024-12-05 12:57:27.959800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.125 [2024-12-05 12:57:27.959825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.125 [2024-12-05 12:57:27.959841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.125 [2024-12-05 12:57:27.959851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:28.125 [2024-12-05 12:57:27.959859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.125 [2024-12-05 12:57:27.959867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.125 [2024-12-05 12:57:27.971585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.971781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:28.385 [2024-12-05 12:57:27.971823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 [2024-12-05 12:57:27.971832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.980768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.980828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:28.385 [2024-12-05 12:57:27.980842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 [2024-12-05 12:57:27.980851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.980912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.980922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:28.385 [2024-12-05 12:57:27.980930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 
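In the statistics dump above, 'WAF: inf' is the counters speaking for themselves: write amplification is total media writes over user writes, and this pass issued 960 internal (metadata) writes against zero user writes, so the ratio is undefined and printed as inf. The 'Rollback' entries interleaved here are likewise part of a clean shutdown, each startup action's rollback running in reverse order with durations below the reporting resolution. A trivial restatement of the WAF arithmetic:

    total_writes=960   # 'total writes: 960' from the dump
    user_writes=0      # 'user writes: 0'
    if [ "$user_writes" -eq 0 ]; then echo "WAF: inf"
    else echo "WAF: $(( total_writes / user_writes ))"; fi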
[2024-12-05 12:57:27.980945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.980971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.980979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:28.385 [2024-12-05 12:57:27.980990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 [2024-12-05 12:57:27.980998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.981069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.981201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:28.385 [2024-12-05 12:57:27.981215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 [2024-12-05 12:57:27.981223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.981260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.981270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:28.385 [2024-12-05 12:57:27.981279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 [2024-12-05 12:57:27.981290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.981329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.981338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:28.385 [2024-12-05 12:57:27.981345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 [2024-12-05 12:57:27.981353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.981407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:28.385 [2024-12-05 12:57:27.981418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:28.385 [2024-12-05 12:57:27.981430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:28.385 [2024-12-05 12:57:27.981438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.385 [2024-12-05 12:57:27.981565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.214 ms, result 0 00:26:30.284 00:26:30.284 00:26:30.284 12:57:29 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:30.284 [2024-12-05 12:57:29.991893] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
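This second spdk_dd invocation (restore.sh step 74) is the inverse of the earlier write: --ib=ftl0 reads the same 262144 blocks back out of the FTL bdev into the test file, after which the harness can md5sum the file again and compare it against the checksum taken before the write. A sketch of the read-back and compare, with the same paths and flags as above:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    "$SPDK_DD" --ib=ftl0 --of="$TESTFILE" --json="$FTL_JSON" --count=262144
    md5sum "$TESTFILE"    # should match the checksum recorded before the write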
00:26:30.285 [2024-12-05 12:57:29.992236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88786 ] 00:26:30.543 [2024-12-05 12:57:30.155262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.543 [2024-12-05 12:57:30.189411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.543 [2024-12-05 12:57:30.303633] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:30.543 [2024-12-05 12:57:30.303719] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:30.803 [2024-12-05 12:57:30.458304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.458384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:30.803 [2024-12-05 12:57:30.458398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:30.803 [2024-12-05 12:57:30.458408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.458467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.458482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:30.803 [2024-12-05 12:57:30.458491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:30.803 [2024-12-05 12:57:30.458503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.458527] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:30.803 [2024-12-05 12:57:30.458841] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:30.803 [2024-12-05 12:57:30.458859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.458869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:30.803 [2024-12-05 12:57:30.458880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:26:30.803 [2024-12-05 12:57:30.458888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.460251] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:30.803 [2024-12-05 12:57:30.462986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.463027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:30.803 [2024-12-05 12:57:30.463045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:26:30.803 [2024-12-05 12:57:30.463057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.463129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.463144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:30.803 [2024-12-05 12:57:30.463155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:30.803 [2024-12-05 12:57:30.463163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.469522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
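Every management step emits the same trace_step records, Action or Rollback, then name, duration, and status, so a captured console log (one entry per line) can be summarized with a short awk pass that pairs each step name with its duration. A hypothetical convenience one-liner, not part of the SPDK tree:

    awk '/428:trace_step/ { n=$0; sub(/.*name: /, "", n) }
         /430:trace_step/ { d=$0; sub(/.*duration: /, "", d); printf "%-12s %s\n", d, n }' console.log \
      | sort -rn | head    # slowest steps first; in this run 'Initialize NV cache' leads at 18.199 ms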
00:26:30.803 [2024-12-05 12:57:30.469730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:30.803 [2024-12-05 12:57:30.469759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.298 ms 00:26:30.803 [2024-12-05 12:57:30.469774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.469919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.469935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:30.803 [2024-12-05 12:57:30.469944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:26:30.803 [2024-12-05 12:57:30.469952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.470026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.470041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:30.803 [2024-12-05 12:57:30.470050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:30.803 [2024-12-05 12:57:30.470061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.470090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:30.803 [2024-12-05 12:57:30.471754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.471789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:30.803 [2024-12-05 12:57:30.471799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:26:30.803 [2024-12-05 12:57:30.471823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.471863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.471875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:30.803 [2024-12-05 12:57:30.471884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:30.803 [2024-12-05 12:57:30.471895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.803 [2024-12-05 12:57:30.471928] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:30.803 [2024-12-05 12:57:30.471951] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:30.803 [2024-12-05 12:57:30.471990] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:30.803 [2024-12-05 12:57:30.472014] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:30.803 [2024-12-05 12:57:30.472119] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:30.803 [2024-12-05 12:57:30.472130] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:30.803 [2024-12-05 12:57:30.472145] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:30.803 [2024-12-05 12:57:30.472155] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:30.803 [2024-12-05 12:57:30.472165] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:30.803 [2024-12-05 12:57:30.472173] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:30.803 [2024-12-05 12:57:30.472184] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:30.803 [2024-12-05 12:57:30.472192] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:30.803 [2024-12-05 12:57:30.472199] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:30.803 [2024-12-05 12:57:30.472207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.803 [2024-12-05 12:57:30.472215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:30.804 [2024-12-05 12:57:30.472222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:26:30.804 [2024-12-05 12:57:30.472231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.804 [2024-12-05 12:57:30.472319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.804 [2024-12-05 12:57:30.472327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:30.804 [2024-12-05 12:57:30.472336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:30.804 [2024-12-05 12:57:30.472346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.804 [2024-12-05 12:57:30.472452] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:30.804 [2024-12-05 12:57:30.472463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:30.804 [2024-12-05 12:57:30.472473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.804 [2024-12-05 12:57:30.472490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:30.804 [2024-12-05 12:57:30.472507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:30.804 [2024-12-05 12:57:30.472523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:30.804 [2024-12-05 12:57:30.472531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.804 [2024-12-05 12:57:30.472546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:30.804 [2024-12-05 12:57:30.472556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:30.804 [2024-12-05 12:57:30.472564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.804 [2024-12-05 12:57:30.472572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:30.804 [2024-12-05 12:57:30.472580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:30.804 [2024-12-05 12:57:30.472587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:30.804 [2024-12-05 12:57:30.472606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:30.804 [2024-12-05 12:57:30.472614] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:30.804 [2024-12-05 12:57:30.472630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.804 [2024-12-05 12:57:30.472645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:30.804 [2024-12-05 12:57:30.472653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.804 [2024-12-05 12:57:30.472669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:30.804 [2024-12-05 12:57:30.472677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.804 [2024-12-05 12:57:30.472696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:30.804 [2024-12-05 12:57:30.472704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.804 [2024-12-05 12:57:30.472720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:30.804 [2024-12-05 12:57:30.472727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.804 [2024-12-05 12:57:30.472742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:30.804 [2024-12-05 12:57:30.472750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:30.804 [2024-12-05 12:57:30.472757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.804 [2024-12-05 12:57:30.472765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:30.804 [2024-12-05 12:57:30.472772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:30.804 [2024-12-05 12:57:30.472779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.804 [2024-12-05 12:57:30.472787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:30.804 [2024-12-05 12:57:30.472794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:30.804 [2024-12-05 12:57:30.472801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.804 [2024-12-05 12:57:30.473124] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:30.804 [2024-12-05 12:57:30.473192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:30.804 [2024-12-05 12:57:30.473218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.804 [2024-12-05 12:57:30.473238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.804 [2024-12-05 12:57:30.473305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:30.804 [2024-12-05 12:57:30.473325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:30.804 [2024-12-05 12:57:30.473374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:30.804 
[2024-12-05 12:57:30.473406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:30.804 [2024-12-05 12:57:30.473425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:30.804 [2024-12-05 12:57:30.473479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:30.804 [2024-12-05 12:57:30.473504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:30.804 [2024-12-05 12:57:30.473536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.804 [2024-12-05 12:57:30.473591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:30.804 [2024-12-05 12:57:30.473643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:30.804 [2024-12-05 12:57:30.473673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:30.804 [2024-12-05 12:57:30.473728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:30.804 [2024-12-05 12:57:30.473778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:30.804 [2024-12-05 12:57:30.473833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:30.804 [2024-12-05 12:57:30.473930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:30.804 [2024-12-05 12:57:30.473963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:30.804 [2024-12-05 12:57:30.473991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:30.804 [2024-12-05 12:57:30.474048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:30.804 [2024-12-05 12:57:30.474093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:30.804 [2024-12-05 12:57:30.474121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:30.804 [2024-12-05 12:57:30.474151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:30.804 [2024-12-05 12:57:30.474214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:30.804 [2024-12-05 12:57:30.474295] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:30.804 [2024-12-05 12:57:30.474326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.804 [2024-12-05 12:57:30.474356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:30.804 [2024-12-05 12:57:30.474385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:30.804 [2024-12-05 12:57:30.474450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:30.804 [2024-12-05 12:57:30.474482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:30.804 [2024-12-05 12:57:30.474517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.804 [2024-12-05 12:57:30.474537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:30.804 [2024-12-05 12:57:30.474558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:26:30.804 [2024-12-05 12:57:30.474580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.804 [2024-12-05 12:57:30.486284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.804 [2024-12-05 12:57:30.486479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:30.804 [2024-12-05 12:57:30.486543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.588 ms 00:26:30.804 [2024-12-05 12:57:30.486567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.804 [2024-12-05 12:57:30.486682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.804 [2024-12-05 12:57:30.486710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:30.804 [2024-12-05 12:57:30.486760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:30.804 [2024-12-05 12:57:30.486783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.804 [2024-12-05 12:57:30.507627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.804 [2024-12-05 12:57:30.507910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:30.804 [2024-12-05 12:57:30.508002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.738 ms 00:26:30.804 [2024-12-05 12:57:30.508031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.804 [2024-12-05 12:57:30.508133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.804 [2024-12-05 12:57:30.508312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:30.805 [2024-12-05 12:57:30.508341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:30.805 [2024-12-05 12:57:30.508363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.508892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.509010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:30.805 [2024-12-05 12:57:30.509073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:26:30.805 [2024-12-05 12:57:30.509104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.509282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.509317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:30.805 [2024-12-05 12:57:30.509377] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:26:30.805 [2024-12-05 12:57:30.509454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.516130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.516286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:30.805 [2024-12-05 12:57:30.516350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.630 ms 00:26:30.805 [2024-12-05 12:57:30.516379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.519154] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:30.805 [2024-12-05 12:57:30.519306] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:30.805 [2024-12-05 12:57:30.519388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.519536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:30.805 [2024-12-05 12:57:30.519564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:26:30.805 [2024-12-05 12:57:30.519587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.534248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.534480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:30.805 [2024-12-05 12:57:30.534500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.602 ms 00:26:30.805 [2024-12-05 12:57:30.534509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.537520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.537682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:30.805 [2024-12-05 12:57:30.537701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:26:30.805 [2024-12-05 12:57:30.537711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.539150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.539182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:30.805 [2024-12-05 12:57:30.539191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:26:30.805 [2024-12-05 12:57:30.539199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.539588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.539606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:30.805 [2024-12-05 12:57:30.539616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:26:30.805 [2024-12-05 12:57:30.539629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.558162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.558242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:30.805 [2024-12-05 12:57:30.558256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.506 ms 00:26:30.805 [2024-12-05 12:57:30.558264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.566570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:30.805 [2024-12-05 12:57:30.570087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.570130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:30.805 [2024-12-05 12:57:30.570144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.768 ms 00:26:30.805 [2024-12-05 12:57:30.570160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.570276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.570287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:30.805 [2024-12-05 12:57:30.570297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:30.805 [2024-12-05 12:57:30.570305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.570378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.570393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:30.805 [2024-12-05 12:57:30.570407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:30.805 [2024-12-05 12:57:30.570415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.570436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.570445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:30.805 [2024-12-05 12:57:30.570453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:30.805 [2024-12-05 12:57:30.570460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.570495] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:30.805 [2024-12-05 12:57:30.570508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.570519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:30.805 [2024-12-05 12:57:30.570530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:30.805 [2024-12-05 12:57:30.570537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.574195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.574232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:30.805 [2024-12-05 12:57:30.574242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:26:30.805 [2024-12-05 12:57:30.574251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 [2024-12-05 12:57:30.574320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.805 [2024-12-05 12:57:30.574329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:30.805 [2024-12-05 12:57:30.574338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:30.805 [2024-12-05 12:57:30.574354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.805 
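A consistency spot-check on the NV cache layout dumped above: the regions are packed back to back, so each region's offset is the previous region's offset plus its size. A few rows taken from the dump (all values in MiB):

    sb              0.00 + 0.12  -> l2p      at  0.12
    l2p             0.12 + 80.00 -> band_md  at 80.12
    band_md_mirror 80.62 + 0.50  -> p2l0     at 81.12
    p2l3          105.12 + 8.00  -> trim_md  at 113.12

The later regions follow the same pattern, give or take rounding in the last displayed digit (e.g. trim_md at 113.12 + 0.25 -> trim_md_mirror shown at 113.38), which is why the dump lists the metadata regions out of offset order without any gaps appearing.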
[2024-12-05 12:57:30.575472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.739 ms, result 0 00:26:32.218  [2024-12-05T12:57:33.013Z] Copying: 41/1024 [MB] (41 MBps) [2024-12-05T12:57:33.948Z] Copying: 87/1024 [MB] (45 MBps) [2024-12-05T12:57:34.879Z] Copying: 131/1024 [MB] (44 MBps) [2024-12-05T12:57:35.849Z] Copying: 174/1024 [MB] (42 MBps) [2024-12-05T12:57:36.783Z] Copying: 216/1024 [MB] (42 MBps) [2024-12-05T12:57:38.156Z] Copying: 260/1024 [MB] (43 MBps) [2024-12-05T12:57:39.118Z] Copying: 305/1024 [MB] (45 MBps) [2024-12-05T12:57:40.053Z] Copying: 350/1024 [MB] (45 MBps) [2024-12-05T12:57:40.981Z] Copying: 394/1024 [MB] (44 MBps) [2024-12-05T12:57:41.910Z] Copying: 431/1024 [MB] (36 MBps) [2024-12-05T12:57:42.841Z] Copying: 462/1024 [MB] (31 MBps) [2024-12-05T12:57:43.773Z] Copying: 492/1024 [MB] (30 MBps) [2024-12-05T12:57:45.146Z] Copying: 526/1024 [MB] (33 MBps) [2024-12-05T12:57:46.079Z] Copying: 556/1024 [MB] (29 MBps) [2024-12-05T12:57:47.010Z] Copying: 595/1024 [MB] (39 MBps) [2024-12-05T12:57:47.942Z] Copying: 625/1024 [MB] (30 MBps) [2024-12-05T12:57:48.911Z] Copying: 651/1024 [MB] (26 MBps) [2024-12-05T12:57:49.844Z] Copying: 694/1024 [MB] (42 MBps) [2024-12-05T12:57:50.779Z] Copying: 728/1024 [MB] (34 MBps) [2024-12-05T12:57:52.151Z] Copying: 762/1024 [MB] (33 MBps) [2024-12-05T12:57:53.083Z] Copying: 801/1024 [MB] (39 MBps) [2024-12-05T12:57:54.016Z] Copying: 842/1024 [MB] (41 MBps) [2024-12-05T12:57:54.949Z] Copying: 889/1024 [MB] (46 MBps) [2024-12-05T12:57:55.890Z] Copying: 934/1024 [MB] (45 MBps) [2024-12-05T12:57:56.926Z] Copying: 983/1024 [MB] (48 MBps) [2024-12-05T12:57:58.821Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-12-05 12:57:58.549857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.549933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:58.969 [2024-12-05 12:57:58.549971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:58.969 [2024-12-05 12:57:58.549981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.969 [2024-12-05 12:57:58.550007] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:58.969 [2024-12-05 12:57:58.550571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.550591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:58.969 [2024-12-05 12:57:58.550600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:26:58.969 [2024-12-05 12:57:58.550608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.969 [2024-12-05 12:57:58.550869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.550881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:58.969 [2024-12-05 12:57:58.550891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:26:58.969 [2024-12-05 12:57:58.550903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.969 [2024-12-05 12:57:58.554697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.554748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:58.969 [2024-12-05 12:57:58.554760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.776 ms 00:26:58.969 [2024-12-05 12:57:58.554768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.969 [2024-12-05 12:57:58.561084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.561125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:58.969 [2024-12-05 12:57:58.561145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.291 ms 00:26:58.969 [2024-12-05 12:57:58.561157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.969 [2024-12-05 12:57:58.562768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.562804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:58.969 [2024-12-05 12:57:58.562828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:26:58.969 [2024-12-05 12:57:58.562836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.969 [2024-12-05 12:57:58.565956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.565990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:58.969 [2024-12-05 12:57:58.566000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:26:58.969 [2024-12-05 12:57:58.566008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.969 [2024-12-05 12:57:58.566126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.969 [2024-12-05 12:57:58.566136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:58.969 [2024-12-05 12:57:58.566153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:26:58.970 [2024-12-05 12:57:58.566168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.970 [2024-12-05 12:57:58.568080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.970 [2024-12-05 12:57:58.568224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:58.970 [2024-12-05 12:57:58.568241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:26:58.970 [2024-12-05 12:57:58.568250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.970 [2024-12-05 12:57:58.569516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.970 [2024-12-05 12:57:58.569551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:58.970 [2024-12-05 12:57:58.569559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:26:58.970 [2024-12-05 12:57:58.569567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.970 [2024-12-05 12:57:58.570433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.970 [2024-12-05 12:57:58.570464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:58.970 [2024-12-05 12:57:58.570473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:26:58.970 [2024-12-05 12:57:58.570480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.970 [2024-12-05 12:57:58.571307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.970 [2024-12-05 12:57:58.571423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:58.970 [2024-12-05 
12:57:58.571437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:26:58.970 [2024-12-05 12:57:58.571444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.970 [2024-12-05 12:57:58.571461] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:58.970 [2024-12-05 12:57:58.571476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571851] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.571996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572040] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 
12:57:58.572255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:58.970 [2024-12-05 12:57:58.572287] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:58.970 [2024-12-05 12:57:58.572295] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10d2bf2b-14bb-46ff-9a7a-1879f8c5a96d 00:26:58.970 [2024-12-05 12:57:58.572303] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:58.970 [2024-12-05 12:57:58.572310] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:58.970 [2024-12-05 12:57:58.572318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:58.970 [2024-12-05 12:57:58.572326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:58.971 [2024-12-05 12:57:58.572333] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:58.971 [2024-12-05 12:57:58.572340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:58.971 [2024-12-05 12:57:58.572364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:58.971 [2024-12-05 12:57:58.572376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:58.971 [2024-12-05 12:57:58.572383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:58.971 [2024-12-05 12:57:58.572390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.971 [2024-12-05 12:57:58.572398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:58.971 [2024-12-05 12:57:58.572407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:26:58.971 [2024-12-05 12:57:58.572414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.574309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.971 [2024-12-05 12:57:58.574332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:58.971 [2024-12-05 12:57:58.574342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:26:58.971 [2024-12-05 12:57:58.574351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.574447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.971 [2024-12-05 12:57:58.574456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:58.971 [2024-12-05 12:57:58.574466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:58.971 [2024-12-05 12:57:58.574474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.580727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.580871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:58.971 [2024-12-05 12:57:58.580994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.581031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.581144] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.581214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:58.971 [2024-12-05 12:57:58.581261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.581284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.581362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.581419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:58.971 [2024-12-05 12:57:58.581478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.581490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.581512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.581521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:58.971 [2024-12-05 12:57:58.581528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.581536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.593238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.593293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:58.971 [2024-12-05 12:57:58.593304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.593312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.602412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.602465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:58.971 [2024-12-05 12:57:58.602477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.602485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.602546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.602556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:58.971 [2024-12-05 12:57:58.602564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.602572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.602599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.602613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:58.971 [2024-12-05 12:57:58.602621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.602629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.602705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.602715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:58.971 [2024-12-05 12:57:58.602723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.602731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:26:58.971 [2024-12-05 12:57:58.602763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.602773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:58.971 [2024-12-05 12:57:58.602783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.602790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.602850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.602860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:58.971 [2024-12-05 12:57:58.602869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.602876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.602922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.971 [2024-12-05 12:57:58.602934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:58.971 [2024-12-05 12:57:58.602942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.971 [2024-12-05 12:57:58.602950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.971 [2024-12-05 12:57:58.603079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.193 ms, result 0 00:26:59.536 00:26:59.536 00:26:59.536 12:57:59 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:02.163 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:02.163 12:58:01 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:02.163 [2024-12-05 12:58:01.603143] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
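After the clean shutdown above, the test checks the read-back file against a stored checksum and then writes it back into the device at an offset. A sketch of both steps with the paths and seek value from this run (--seek offsets the output side in blocks, mirroring dd semantics):

    # Verify the file copied out of ftl0 earlier in the test.
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

    # Write it back into the bdev, skipping 131072 output blocks first.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --seek=131072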
00:27:02.163 [2024-12-05 12:58:01.603278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89115 ] 00:27:02.163 [2024-12-05 12:58:01.761318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.163 [2024-12-05 12:58:01.785950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.163 [2024-12-05 12:58:01.891199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:02.163 [2024-12-05 12:58:01.891280] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:02.421 [2024-12-05 12:58:02.045671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.045896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:02.421 [2024-12-05 12:58:02.045918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:02.421 [2024-12-05 12:58:02.045934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.045992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.046003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:02.421 [2024-12-05 12:58:02.046012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:02.421 [2024-12-05 12:58:02.046020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.046042] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:02.421 [2024-12-05 12:58:02.046291] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:02.421 [2024-12-05 12:58:02.046305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.046316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:02.421 [2024-12-05 12:58:02.046327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:27:02.421 [2024-12-05 12:58:02.046335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.047657] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:02.421 [2024-12-05 12:58:02.050291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.050325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:02.421 [2024-12-05 12:58:02.050341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:27:02.421 [2024-12-05 12:58:02.050352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.050406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.050415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:02.421 [2024-12-05 12:58:02.050427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:02.421 [2024-12-05 12:58:02.050438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.056774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:02.421 [2024-12-05 12:58:02.056822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:02.421 [2024-12-05 12:58:02.056835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.285 ms 00:27:02.421 [2024-12-05 12:58:02.056843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.056931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.056941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:02.421 [2024-12-05 12:58:02.056950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:02.421 [2024-12-05 12:58:02.056958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.057009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.057020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:02.421 [2024-12-05 12:58:02.057028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:02.421 [2024-12-05 12:58:02.057042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.057064] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:02.421 [2024-12-05 12:58:02.058722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.058752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:02.421 [2024-12-05 12:58:02.058761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:27:02.421 [2024-12-05 12:58:02.058769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.058802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.058827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:02.421 [2024-12-05 12:58:02.058837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:02.421 [2024-12-05 12:58:02.058847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.058866] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:02.421 [2024-12-05 12:58:02.058891] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:02.421 [2024-12-05 12:58:02.058926] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:02.421 [2024-12-05 12:58:02.058945] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:02.421 [2024-12-05 12:58:02.059054] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:02.421 [2024-12-05 12:58:02.059064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:02.421 [2024-12-05 12:58:02.059077] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:02.421 [2024-12-05 12:58:02.059091] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:02.421 [2024-12-05 12:58:02.059100] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:02.421 [2024-12-05 12:58:02.059108] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:02.421 [2024-12-05 12:58:02.059119] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:02.421 [2024-12-05 12:58:02.059126] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:02.421 [2024-12-05 12:58:02.059133] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:02.421 [2024-12-05 12:58:02.059141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.059149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:02.421 [2024-12-05 12:58:02.059157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:27:02.421 [2024-12-05 12:58:02.059164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.059250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.421 [2024-12-05 12:58:02.059258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:02.421 [2024-12-05 12:58:02.059266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:02.421 [2024-12-05 12:58:02.059272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.421 [2024-12-05 12:58:02.059386] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:02.421 [2024-12-05 12:58:02.059397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:02.421 [2024-12-05 12:58:02.059407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:02.421 [2024-12-05 12:58:02.059424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.421 [2024-12-05 12:58:02.059433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:02.421 [2024-12-05 12:58:02.059441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:02.421 [2024-12-05 12:58:02.059449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:02.421 [2024-12-05 12:58:02.059458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:02.421 [2024-12-05 12:58:02.059466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:02.421 [2024-12-05 12:58:02.059474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:02.421 [2024-12-05 12:58:02.059482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:02.421 [2024-12-05 12:58:02.059491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:02.421 [2024-12-05 12:58:02.059498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:02.422 [2024-12-05 12:58:02.059506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:02.422 [2024-12-05 12:58:02.059514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:02.422 [2024-12-05 12:58:02.059522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:02.422 [2024-12-05 12:58:02.059537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:02.422 [2024-12-05 12:58:02.059544] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:02.422 [2024-12-05 12:58:02.059560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.422 [2024-12-05 12:58:02.059575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:02.422 [2024-12-05 12:58:02.059583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.422 [2024-12-05 12:58:02.059598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:02.422 [2024-12-05 12:58:02.059607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.422 [2024-12-05 12:58:02.059627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:02.422 [2024-12-05 12:58:02.059635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:02.422 [2024-12-05 12:58:02.059650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:02.422 [2024-12-05 12:58:02.059658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:02.422 [2024-12-05 12:58:02.059673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:02.422 [2024-12-05 12:58:02.059680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:02.422 [2024-12-05 12:58:02.059688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:02.422 [2024-12-05 12:58:02.059696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:02.422 [2024-12-05 12:58:02.059703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:02.422 [2024-12-05 12:58:02.059711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:02.422 [2024-12-05 12:58:02.059725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:02.422 [2024-12-05 12:58:02.059732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059741] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:02.422 [2024-12-05 12:58:02.059750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:02.422 [2024-12-05 12:58:02.059757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:02.422 [2024-12-05 12:58:02.059764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:02.422 [2024-12-05 12:58:02.059771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:02.422 [2024-12-05 12:58:02.059778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:02.422 [2024-12-05 12:58:02.059785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:02.422 
[2024-12-05 12:58:02.059791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:02.422 [2024-12-05 12:58:02.059797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:02.422 [2024-12-05 12:58:02.059803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:02.422 [2024-12-05 12:58:02.059824] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:02.422 [2024-12-05 12:58:02.059833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.422 [2024-12-05 12:58:02.059842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:02.422 [2024-12-05 12:58:02.059849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:02.422 [2024-12-05 12:58:02.059856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:02.422 [2024-12-05 12:58:02.059864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:02.422 [2024-12-05 12:58:02.059874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:02.422 [2024-12-05 12:58:02.059881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:02.422 [2024-12-05 12:58:02.059889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:02.422 [2024-12-05 12:58:02.059896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:02.422 [2024-12-05 12:58:02.059903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:02.422 [2024-12-05 12:58:02.059911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:02.422 [2024-12-05 12:58:02.059918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:02.422 [2024-12-05 12:58:02.059925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:02.422 [2024-12-05 12:58:02.059933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:02.422 [2024-12-05 12:58:02.059940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:02.422 [2024-12-05 12:58:02.059947] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:02.422 [2024-12-05 12:58:02.059959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:02.422 [2024-12-05 12:58:02.059968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:02.422 [2024-12-05 12:58:02.059975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:02.422 [2024-12-05 12:58:02.059983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:02.422 [2024-12-05 12:58:02.059990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:02.422 [2024-12-05 12:58:02.060000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.060008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:02.422 [2024-12-05 12:58:02.060015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:27:02.422 [2024-12-05 12:58:02.060025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.072215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.072263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:02.422 [2024-12-05 12:58:02.072276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.147 ms 00:27:02.422 [2024-12-05 12:58:02.072285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.072374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.072384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:02.422 [2024-12-05 12:58:02.072392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:02.422 [2024-12-05 12:58:02.072406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.099319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.099664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:02.422 [2024-12-05 12:58:02.099715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.849 ms 00:27:02.422 [2024-12-05 12:58:02.099741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.099886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.099918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:02.422 [2024-12-05 12:58:02.099947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:02.422 [2024-12-05 12:58:02.099970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.100688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.100763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:02.422 [2024-12-05 12:58:02.100791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:27:02.422 [2024-12-05 12:58:02.100839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.101192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.101230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:02.422 [2024-12-05 12:58:02.101255] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:27:02.422 [2024-12-05 12:58:02.101276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.108204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.422 [2024-12-05 12:58:02.108233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:02.422 [2024-12-05 12:58:02.108243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:27:02.422 [2024-12-05 12:58:02.108251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.422 [2024-12-05 12:58:02.111012] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:02.422 [2024-12-05 12:58:02.111046] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:02.422 [2024-12-05 12:58:02.111061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.111069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:02.423 [2024-12-05 12:58:02.111078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:27:02.423 [2024-12-05 12:58:02.111085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.125486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.125537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:02.423 [2024-12-05 12:58:02.125548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.364 ms 00:27:02.423 [2024-12-05 12:58:02.125557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.127296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.127326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:02.423 [2024-12-05 12:58:02.127335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:27:02.423 [2024-12-05 12:58:02.127342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.128690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.128720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:02.423 [2024-12-05 12:58:02.128729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:27:02.423 [2024-12-05 12:58:02.128735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.129073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.129089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:02.423 [2024-12-05 12:58:02.129098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:27:02.423 [2024-12-05 12:58:02.129105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.147006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.147067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:02.423 [2024-12-05 12:58:02.147080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.878 ms 00:27:02.423 [2024-12-05 12:58:02.147089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.154883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:02.423 [2024-12-05 12:58:02.157950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.158097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:02.423 [2024-12-05 12:58:02.158115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.815 ms 00:27:02.423 [2024-12-05 12:58:02.158124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.158201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.158212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:02.423 [2024-12-05 12:58:02.158221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:02.423 [2024-12-05 12:58:02.158229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.158306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.158322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:02.423 [2024-12-05 12:58:02.158331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:02.423 [2024-12-05 12:58:02.158339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.158359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.158372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:02.423 [2024-12-05 12:58:02.158380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:02.423 [2024-12-05 12:58:02.158387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.158422] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:02.423 [2024-12-05 12:58:02.158433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.158450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:02.423 [2024-12-05 12:58:02.158458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:02.423 [2024-12-05 12:58:02.158466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.161969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.162002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:02.423 [2024-12-05 12:58:02.162014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.487 ms 00:27:02.423 [2024-12-05 12:58:02.162030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 [2024-12-05 12:58:02.162101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.423 [2024-12-05 12:58:02.162111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:02.423 [2024-12-05 12:58:02.162123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:02.423 [2024-12-05 12:58:02.162132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.423 
[2024-12-05 12:58:02.163197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.091 ms, result 0 00:27:03.354  [2024-12-05T12:58:04.577Z] Copying: 45/1024 [MB] (45 MBps) [2024-12-05T12:58:05.510Z] Copying: 94/1024 [MB] (49 MBps) [2024-12-05T12:58:06.441Z] Copying: 139/1024 [MB] (45 MBps) [2024-12-05T12:58:07.377Z] Copying: 185/1024 [MB] (45 MBps) [2024-12-05T12:58:08.371Z] Copying: 231/1024 [MB] (45 MBps) [2024-12-05T12:58:09.305Z] Copying: 275/1024 [MB] (44 MBps) [2024-12-05T12:58:10.238Z] Copying: 321/1024 [MB] (45 MBps) [2024-12-05T12:58:11.678Z] Copying: 365/1024 [MB] (43 MBps) [2024-12-05T12:58:12.243Z] Copying: 408/1024 [MB] (43 MBps) [2024-12-05T12:58:13.177Z] Copying: 453/1024 [MB] (45 MBps) [2024-12-05T12:58:14.550Z] Copying: 503/1024 [MB] (49 MBps) [2024-12-05T12:58:15.497Z] Copying: 546/1024 [MB] (42 MBps) [2024-12-05T12:58:16.429Z] Copying: 591/1024 [MB] (44 MBps) [2024-12-05T12:58:17.362Z] Copying: 634/1024 [MB] (42 MBps) [2024-12-05T12:58:18.293Z] Copying: 677/1024 [MB] (43 MBps) [2024-12-05T12:58:19.228Z] Copying: 723/1024 [MB] (45 MBps) [2024-12-05T12:58:20.601Z] Copying: 767/1024 [MB] (43 MBps) [2024-12-05T12:58:21.532Z] Copying: 811/1024 [MB] (44 MBps) [2024-12-05T12:58:22.461Z] Copying: 856/1024 [MB] (44 MBps) [2024-12-05T12:58:23.392Z] Copying: 897/1024 [MB] (40 MBps) [2024-12-05T12:58:24.324Z] Copying: 939/1024 [MB] (42 MBps) [2024-12-05T12:58:25.264Z] Copying: 983/1024 [MB] (44 MBps) [2024-12-05T12:58:26.193Z] Copying: 1023/1024 [MB] (39 MBps) [2024-12-05T12:58:26.193Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-12-05 12:58:26.148521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.341 [2024-12-05 12:58:26.148724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:26.341 [2024-12-05 12:58:26.148749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:26.341 [2024-12-05 12:58:26.148758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.341 [2024-12-05 12:58:26.151765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:26.341 [2024-12-05 12:58:26.154433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.341 [2024-12-05 12:58:26.154477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:26.341 [2024-12-05 12:58:26.154488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.492 ms 00:27:26.341 [2024-12-05 12:58:26.154496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.341 [2024-12-05 12:58:26.165852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.341 [2024-12-05 12:58:26.165892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:26.341 [2024-12-05 12:58:26.165907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.379 ms 00:27:26.341 [2024-12-05 12:58:26.165915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.341 [2024-12-05 12:58:26.183327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.341 [2024-12-05 12:58:26.183471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:26.341 [2024-12-05 12:58:26.183490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.396 ms 00:27:26.341 [2024-12-05 12:58:26.183510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:26.341 [2024-12-05 12:58:26.189630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.341 [2024-12-05 12:58:26.189659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:26.341 [2024-12-05 12:58:26.189670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:27:26.341 [2024-12-05 12:58:26.189679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.341 [2024-12-05 12:58:26.191008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.341 [2024-12-05 12:58:26.191036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:26.341 [2024-12-05 12:58:26.191044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:27:26.341 [2024-12-05 12:58:26.191052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.602 [2024-12-05 12:58:26.194409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.602 [2024-12-05 12:58:26.194450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:26.602 [2024-12-05 12:58:26.194488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.327 ms 00:27:26.602 [2024-12-05 12:58:26.194501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.602 [2024-12-05 12:58:26.243339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.602 [2024-12-05 12:58:26.243405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:26.602 [2024-12-05 12:58:26.243434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.790 ms 00:27:26.602 [2024-12-05 12:58:26.243443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.602 [2024-12-05 12:58:26.245324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.602 [2024-12-05 12:58:26.245451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:26.602 [2024-12-05 12:58:26.245467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.865 ms 00:27:26.602 [2024-12-05 12:58:26.245475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.602 [2024-12-05 12:58:26.246467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.602 [2024-12-05 12:58:26.246493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:26.602 [2024-12-05 12:58:26.246501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:27:26.602 [2024-12-05 12:58:26.246509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.602 [2024-12-05 12:58:26.247339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.602 [2024-12-05 12:58:26.247368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:26.602 [2024-12-05 12:58:26.247377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:27:26.602 [2024-12-05 12:58:26.247384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.602 [2024-12-05 12:58:26.248211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.602 [2024-12-05 12:58:26.248240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:26.602 [2024-12-05 12:58:26.248249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:27:26.602 [2024-12-05 
12:58:26.248255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.602 [2024-12-05 12:58:26.248280] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:26.602 [2024-12-05 12:58:26.248294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121344 / 261120 wr_cnt: 1 state: open 00:27:26.602 [2024-12-05 12:58:26.248305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248471] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 
12:58:26.248661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:26.602 [2024-12-05 12:58:26.248766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.248774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.248781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.248790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.248798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.248998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:27:26.603 [2024-12-05 12:58:26.249261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:26.603 [2024-12-05 12:58:26.249759] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:26.603 [2024-12-05 12:58:26.249768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10d2bf2b-14bb-46ff-9a7a-1879f8c5a96d 00:27:26.603 [2024-12-05 12:58:26.249786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121344 00:27:26.603 [2024-12-05 12:58:26.249793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122304 00:27:26.603 [2024-12-05 12:58:26.249800] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121344 00:27:26.603 [2024-12-05 12:58:26.249819] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:27:26.603 [2024-12-05 12:58:26.249827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:26.603 [2024-12-05 12:58:26.249835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:26.603 [2024-12-05 12:58:26.249843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:26.603 [2024-12-05 12:58:26.249857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:26.603 [2024-12-05 12:58:26.249863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:26.603 [2024-12-05 12:58:26.249871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.603 [2024-12-05 12:58:26.249879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:26.603 [2024-12-05 12:58:26.249894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.592 ms 00:27:26.603 [2024-12-05 12:58:26.249901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.251682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.603 [2024-12-05 12:58:26.251703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:26.603 [2024-12-05 12:58:26.251712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:27:26.603 [2024-12-05 12:58:26.251720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.251830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.603 [2024-12-05 12:58:26.251840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:26.603 [2024-12-05 12:58:26.251852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:27:26.603 [2024-12-05 12:58:26.251860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.257771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.257889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:26.603 [2024-12-05 12:58:26.257937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.257960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.258026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.258055] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:26.603 [2024-12-05 12:58:26.258109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.258131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.258231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.258264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:26.603 [2024-12-05 12:58:26.258285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.258336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.258447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.258481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:26.603 [2024-12-05 12:58:26.258571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.258597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.269735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.269910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:26.603 [2024-12-05 12:58:26.269960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.269983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.278615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.278770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:26.603 [2024-12-05 12:58:26.278838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.278871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.279062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.279147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:26.603 [2024-12-05 12:58:26.279195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.279217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.279305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.279330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:26.603 [2024-12-05 12:58:26.279350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.279403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.279486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.279496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:26.603 [2024-12-05 12:58:26.279505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.279513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.279546] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.279555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:26.603 [2024-12-05 12:58:26.279568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.603 [2024-12-05 12:58:26.279575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.603 [2024-12-05 12:58:26.279617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.603 [2024-12-05 12:58:26.279630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:26.603 [2024-12-05 12:58:26.279637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.604 [2024-12-05 12:58:26.279645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.604 [2024-12-05 12:58:26.279688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.604 [2024-12-05 12:58:26.279699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:26.604 [2024-12-05 12:58:26.279707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.604 [2024-12-05 12:58:26.279714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.604 [2024-12-05 12:58:26.279864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 134.104 ms, result 0 00:27:30.790 00:27:30.790 00:27:30.790 12:58:30 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:30.790 [2024-12-05 12:58:30.417851] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
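The statistics dumped during the shutdown above are internally consistent: the write-amplification factor is simply total writes over user writes, WAF = 122304 / 121344 ≈ 1.0079, i.e. under 1% of the writes were the FTL's own internal writes rather than user data. The restore.sh@80 invocation that follows is the mirror image of the @79 write: --ib=ftl0 and --of pull the data back out of the bdev into the file, --skip=131072 addresses the same offset the earlier --seek=131072 wrote, and --count=262144 caps how many blocks are copied.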
00:27:30.790 [2024-12-05 12:58:30.417990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89380 ] 00:27:30.790 [2024-12-05 12:58:30.580571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.790 [2024-12-05 12:58:30.605131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.048 [2024-12-05 12:58:30.708219] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.048 [2024-12-05 12:58:30.708295] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.048 [2024-12-05 12:58:30.864162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.864222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:31.048 [2024-12-05 12:58:30.864237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:31.048 [2024-12-05 12:58:30.864246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.864293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.864303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:31.048 [2024-12-05 12:58:30.864312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:31.048 [2024-12-05 12:58:30.864320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.864345] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:31.048 [2024-12-05 12:58:30.864586] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:31.048 [2024-12-05 12:58:30.864604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.864611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:31.048 [2024-12-05 12:58:30.864622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:27:31.048 [2024-12-05 12:58:30.864630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.865986] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:31.048 [2024-12-05 12:58:30.868415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.868450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:31.048 [2024-12-05 12:58:30.868466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:27:31.048 [2024-12-05 12:58:30.868477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.868532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.868544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:31.048 [2024-12-05 12:58:30.868552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:31.048 [2024-12-05 12:58:30.868559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.874903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
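The pair of "Currently unable to find bdev with name: nvc0n1" notices here appear benign: the open is attempted while the JSON configuration is still instantiating the bdevs, and the startup trace that follows shows the opens succeeding once the config has loaded (Check configuration → Open base bdev → Open cache bdev → Load super block → Validate super block, each step reporting a duration and status). Since every management step logs a "duration: X ms" line, the per-phase cost of a startup or shutdown can be totalled straight from a saved console log — a quick sketch, assuming the output was captured to build.log (hypothetical filename):

    # Sum every per-step duration reported by the FTL management trace
    grep -oE 'duration: [0-9.]+ ms' build.log |
        awk '{ sum += $2 } END { printf "total: %.3f ms\n", sum }'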
00:27:31.048 [2024-12-05 12:58:30.874937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:31.048 [2024-12-05 12:58:30.874950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.291 ms 00:27:31.048 [2024-12-05 12:58:30.874958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.875056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.875065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:31.048 [2024-12-05 12:58:30.875074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:31.048 [2024-12-05 12:58:30.875084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.875129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.875138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:31.048 [2024-12-05 12:58:30.875146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:31.048 [2024-12-05 12:58:30.875159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.875180] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:31.048 [2024-12-05 12:58:30.876832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.876866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:31.048 [2024-12-05 12:58:30.876875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:27:31.048 [2024-12-05 12:58:30.876882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.048 [2024-12-05 12:58:30.876914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.048 [2024-12-05 12:58:30.876923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:31.048 [2024-12-05 12:58:30.876932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:31.049 [2024-12-05 12:58:30.876942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.049 [2024-12-05 12:58:30.876966] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:31.049 [2024-12-05 12:58:30.876989] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:31.049 [2024-12-05 12:58:30.877028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:31.049 [2024-12-05 12:58:30.877044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:31.049 [2024-12-05 12:58:30.877151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:31.049 [2024-12-05 12:58:30.877161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:31.049 [2024-12-05 12:58:30.877174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:31.049 [2024-12-05 12:58:30.877183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877195] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877204] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:31.049 [2024-12-05 12:58:30.877215] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:31.049 [2024-12-05 12:58:30.877222] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:31.049 [2024-12-05 12:58:30.877232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:31.049 [2024-12-05 12:58:30.877244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.049 [2024-12-05 12:58:30.877251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:31.049 [2024-12-05 12:58:30.877259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:27:31.049 [2024-12-05 12:58:30.877274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.049 [2024-12-05 12:58:30.877358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.049 [2024-12-05 12:58:30.877366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:31.049 [2024-12-05 12:58:30.877377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:31.049 [2024-12-05 12:58:30.877384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.049 [2024-12-05 12:58:30.877490] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:31.049 [2024-12-05 12:58:30.877504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:31.049 [2024-12-05 12:58:30.877522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:31.049 [2024-12-05 12:58:30.877556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:31.049 [2024-12-05 12:58:30.877581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.049 [2024-12-05 12:58:30.877600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:31.049 [2024-12-05 12:58:30.877607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:31.049 [2024-12-05 12:58:30.877615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.049 [2024-12-05 12:58:30.877623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:31.049 [2024-12-05 12:58:30.877630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:31.049 [2024-12-05 12:58:30.877638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:31.049 [2024-12-05 12:58:30.877654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877662] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:31.049 [2024-12-05 12:58:30.877678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:31.049 [2024-12-05 12:58:30.877701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:31.049 [2024-12-05 12:58:30.877725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:31.049 [2024-12-05 12:58:30.877748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:31.049 [2024-12-05 12:58:30.877770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.049 [2024-12-05 12:58:30.877786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:31.049 [2024-12-05 12:58:30.877793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:31.049 [2024-12-05 12:58:30.877801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.049 [2024-12-05 12:58:30.877829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:31.049 [2024-12-05 12:58:30.877838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:31.049 [2024-12-05 12:58:30.877845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:31.049 [2024-12-05 12:58:30.877861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:31.049 [2024-12-05 12:58:30.877871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877879] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:31.049 [2024-12-05 12:58:30.877891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:31.049 [2024-12-05 12:58:30.877899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.049 [2024-12-05 12:58:30.877917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:31.049 [2024-12-05 12:58:30.877924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:31.049 [2024-12-05 12:58:30.877932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:31.049 
[2024-12-05 12:58:30.877939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:31.049 [2024-12-05 12:58:30.877946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:31.049 [2024-12-05 12:58:30.877952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:31.049 [2024-12-05 12:58:30.877961] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:31.049 [2024-12-05 12:58:30.877970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.049 [2024-12-05 12:58:30.877979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:31.049 [2024-12-05 12:58:30.877986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:31.049 [2024-12-05 12:58:30.877993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:31.049 [2024-12-05 12:58:30.878003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:31.049 [2024-12-05 12:58:30.878010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:31.049 [2024-12-05 12:58:30.878017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:31.049 [2024-12-05 12:58:30.878024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:31.049 [2024-12-05 12:58:30.878031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:31.049 [2024-12-05 12:58:30.878038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:31.049 [2024-12-05 12:58:30.878045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:31.049 [2024-12-05 12:58:30.878052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:31.049 [2024-12-05 12:58:30.878059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:31.049 [2024-12-05 12:58:30.878066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:31.049 [2024-12-05 12:58:30.878073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:31.049 [2024-12-05 12:58:30.878081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:31.049 [2024-12-05 12:58:30.878090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.049 [2024-12-05 12:58:30.878099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:31.049 [2024-12-05 12:58:30.878106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:31.049 [2024-12-05 12:58:30.878113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:31.049 [2024-12-05 12:58:30.878122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:31.050 [2024-12-05 12:58:30.878129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.050 [2024-12-05 12:58:30.878140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:31.050 [2024-12-05 12:58:30.878148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:27:31.050 [2024-12-05 12:58:30.878157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.050 [2024-12-05 12:58:30.889641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.050 [2024-12-05 12:58:30.889680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.050 [2024-12-05 12:58:30.889695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.437 ms 00:27:31.050 [2024-12-05 12:58:30.889704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.050 [2024-12-05 12:58:30.889786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.050 [2024-12-05 12:58:30.889795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:31.050 [2024-12-05 12:58:30.889803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:31.050 [2024-12-05 12:58:30.889827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.910393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.910450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.309 [2024-12-05 12:58:30.910467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.513 ms 00:27:31.309 [2024-12-05 12:58:30.910478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.910532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.910546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.309 [2024-12-05 12:58:30.910559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:31.309 [2024-12-05 12:58:30.910577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.911108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.911147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.309 [2024-12-05 12:58:30.911161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:27:31.309 [2024-12-05 12:58:30.911173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.911358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.911384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.309 [2024-12-05 12:58:30.911397] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:27:31.309 [2024-12-05 12:58:30.911409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.918745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.918784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.309 [2024-12-05 12:58:30.918797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.304 ms 00:27:31.309 [2024-12-05 12:58:30.918843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.921957] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:31.309 [2024-12-05 12:58:30.922001] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:31.309 [2024-12-05 12:58:30.922021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.922032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:31.309 [2024-12-05 12:58:30.922044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:27:31.309 [2024-12-05 12:58:30.922054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.936431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.936565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:31.309 [2024-12-05 12:58:30.936589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.329 ms 00:27:31.309 [2024-12-05 12:58:30.936601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.938250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.938278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:31.309 [2024-12-05 12:58:30.938287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:27:31.309 [2024-12-05 12:58:30.938294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.939603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.939716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:31.309 [2024-12-05 12:58:30.939730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.278 ms 00:27:31.309 [2024-12-05 12:58:30.939738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.309 [2024-12-05 12:58:30.940087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.309 [2024-12-05 12:58:30.940106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:31.310 [2024-12-05 12:58:30.940115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:27:31.310 [2024-12-05 12:58:30.940123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.957712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.957772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:31.310 [2024-12-05 12:58:30.957785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.566 ms 00:27:31.310 [2024-12-05 12:58:30.957794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.965412] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:31.310 [2024-12-05 12:58:30.968646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.968680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:31.310 [2024-12-05 12:58:30.968693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.792 ms 00:27:31.310 [2024-12-05 12:58:30.968707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.968792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.968802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:31.310 [2024-12-05 12:58:30.968829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:31.310 [2024-12-05 12:58:30.968837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.970594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.970726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:31.310 [2024-12-05 12:58:30.970741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:27:31.310 [2024-12-05 12:58:30.970750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.970783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.970797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:31.310 [2024-12-05 12:58:30.970823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:31.310 [2024-12-05 12:58:30.970831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.970870] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:31.310 [2024-12-05 12:58:30.970880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.970888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:31.310 [2024-12-05 12:58:30.970899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:31.310 [2024-12-05 12:58:30.970906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.975038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.975078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:31.310 [2024-12-05 12:58:30.975088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.115 ms 00:27:31.310 [2024-12-05 12:58:30.975099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 [2024-12-05 12:58:30.975171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.310 [2024-12-05 12:58:30.975180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:31.310 [2024-12-05 12:58:30.975188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:31.310 [2024-12-05 12:58:30.975200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.310 
[2024-12-05 12:58:30.976562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.959 ms, result 0 00:27:32.703  [2024-12-05T12:58:33.487Z] Copying: 42/1024 [MB] (42 MBps) [2024-12-05T12:58:34.420Z] Copying: 86/1024 [MB] (44 MBps) [2024-12-05T12:58:35.477Z] Copying: 132/1024 [MB] (45 MBps) [2024-12-05T12:58:36.408Z] Copying: 179/1024 [MB] (46 MBps) [2024-12-05T12:58:37.342Z] Copying: 226/1024 [MB] (46 MBps) [2024-12-05T12:58:38.393Z] Copying: 273/1024 [MB] (47 MBps) [2024-12-05T12:58:39.325Z] Copying: 319/1024 [MB] (46 MBps) [2024-12-05T12:58:40.256Z] Copying: 363/1024 [MB] (43 MBps) [2024-12-05T12:58:41.186Z] Copying: 405/1024 [MB] (42 MBps) [2024-12-05T12:58:42.560Z] Copying: 447/1024 [MB] (41 MBps) [2024-12-05T12:58:43.493Z] Copying: 493/1024 [MB] (45 MBps) [2024-12-05T12:58:44.426Z] Copying: 540/1024 [MB] (47 MBps) [2024-12-05T12:58:45.359Z] Copying: 587/1024 [MB] (46 MBps) [2024-12-05T12:58:46.355Z] Copying: 623/1024 [MB] (36 MBps) [2024-12-05T12:58:47.287Z] Copying: 647/1024 [MB] (23 MBps) [2024-12-05T12:58:48.220Z] Copying: 683/1024 [MB] (35 MBps) [2024-12-05T12:58:49.595Z] Copying: 727/1024 [MB] (43 MBps) [2024-12-05T12:58:50.162Z] Copying: 773/1024 [MB] (45 MBps) [2024-12-05T12:58:51.535Z] Copying: 819/1024 [MB] (46 MBps) [2024-12-05T12:58:52.468Z] Copying: 865/1024 [MB] (45 MBps) [2024-12-05T12:58:53.402Z] Copying: 914/1024 [MB] (48 MBps) [2024-12-05T12:58:54.335Z] Copying: 959/1024 [MB] (45 MBps) [2024-12-05T12:58:54.593Z] Copying: 1009/1024 [MB] (50 MBps) [2024-12-05T12:58:54.853Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-12-05 12:58:54.714461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.714553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:55.001 [2024-12-05 12:58:54.714575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:55.001 [2024-12-05 12:58:54.714587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.714621] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:55.001 [2024-12-05 12:58:54.715309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.715336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:55.001 [2024-12-05 12:58:54.715357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:27:55.001 [2024-12-05 12:58:54.715370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.715709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.715733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:55.001 [2024-12-05 12:58:54.715747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:27:55.001 [2024-12-05 12:58:54.715760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.724668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.724708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:55.001 [2024-12-05 12:58:54.724729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.884 ms 00:27:55.001 [2024-12-05 12:58:54.724741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:27:55.001 [2024-12-05 12:58:54.731241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.731407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:55.001 [2024-12-05 12:58:54.731424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.468 ms 00:27:55.001 [2024-12-05 12:58:54.731434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.732992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.733020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:55.001 [2024-12-05 12:58:54.733030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.505 ms 00:27:55.001 [2024-12-05 12:58:54.733038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.736012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.736046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:55.001 [2024-12-05 12:58:54.736057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:27:55.001 [2024-12-05 12:58:54.736070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.793078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.793156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:55.001 [2024-12-05 12:58:54.793172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.968 ms 00:27:55.001 [2024-12-05 12:58:54.793199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.794987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.795026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:55.001 [2024-12-05 12:58:54.795036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.772 ms 00:27:55.001 [2024-12-05 12:58:54.795045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.796379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.796426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:55.001 [2024-12-05 12:58:54.796438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:27:55.001 [2024-12-05 12:58:54.796446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.797404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.797438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:55.001 [2024-12-05 12:58:54.797449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:27:55.001 [2024-12-05 12:58:54.797456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.798272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.001 [2024-12-05 12:58:54.798303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:55.001 [2024-12-05 12:58:54.798313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:27:55.001 [2024-12-05 
12:58:54.798320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.001 [2024-12-05 12:58:54.798346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:55.001 [2024-12-05 12:58:54.798363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:55.001 [2024-12-05 12:58:54.798373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:55.001 [2024-12-05 12:58:54.798381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:55.001 [2024-12-05 12:58:54.798389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:55.001 [2024-12-05 12:58:54.798396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:55.001 [2024-12-05 12:58:54.798404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798538] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 
12:58:54.798725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:27:55.002 [2024-12-05 12:58:54.798931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.798999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:55.002 [2024-12-05 12:58:54.799058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:55.003 [2024-12-05 12:58:54.799141] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:55.003 [2024-12-05 12:58:54.799149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 10d2bf2b-14bb-46ff-9a7a-1879f8c5a96d 00:27:55.003 [2024-12-05 12:58:54.799157] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:55.003 [2024-12-05 12:58:54.799174] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 10688 00:27:55.003 [2024-12-05 12:58:54.799181] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 9728 00:27:55.003 [2024-12-05 12:58:54.799189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0987 00:27:55.003 [2024-12-05 12:58:54.799196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:55.003 [2024-12-05 12:58:54.799205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:55.003 [2024-12-05 12:58:54.799218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:55.003 [2024-12-05 12:58:54.799225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:55.003 [2024-12-05 12:58:54.799244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:55.003 [2024-12-05 12:58:54.799251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.003 [2024-12-05 12:58:54.799258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:55.003 [2024-12-05 12:58:54.799266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:27:55.003 [2024-12-05 12:58:54.799273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.801102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.003 [2024-12-05 12:58:54.801224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:55.003 [2024-12-05 12:58:54.801240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:27:55.003 [2024-12-05 12:58:54.801249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.801343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:55.003 [2024-12-05 12:58:54.801352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:55.003 [2024-12-05 12:58:54.801362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:55.003 [2024-12-05 12:58:54.801377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.807285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.807311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:55.003 [2024-12-05 12:58:54.807321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.807330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.807387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.807396] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:55.003 [2024-12-05 12:58:54.807404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.807414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.807490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.807501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:55.003 [2024-12-05 12:58:54.807509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.807516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.807532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.807540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:55.003 [2024-12-05 12:58:54.807548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.807555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.818940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.818996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:55.003 [2024-12-05 12:58:54.819008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.819016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.827830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.827880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:55.003 [2024-12-05 12:58:54.827901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.827909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.827995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.828005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:55.003 [2024-12-05 12:58:54.828014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.828022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.828047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.828060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:55.003 [2024-12-05 12:58:54.828068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.828076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.828144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.828156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:55.003 [2024-12-05 12:58:54.828164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.828171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.828200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.828209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:55.003 [2024-12-05 12:58:54.828217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.828225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.828268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.828282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:55.003 [2024-12-05 12:58:54.828291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.828298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.828345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:55.003 [2024-12-05 12:58:54.828355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:55.003 [2024-12-05 12:58:54.828362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:55.003 [2024-12-05 12:58:54.828374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:55.003 [2024-12-05 12:58:54.828501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 114.013 ms, result 0 00:27:55.568 00:27:55.568 00:27:55.568 12:58:55 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:57.478 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88305 00:27:57.478 12:58:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88305 ']' 00:27:57.478 12:58:56 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88305 00:27:57.478 Process with pid 88305 is not found 00:27:57.478 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88305) - No such process 00:27:57.478 12:58:56 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88305 is not found' 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:57.478 Remove shared memory files 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:57.478 12:58:56 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:57.478 ************************************ 00:27:57.478 END TEST ftl_restore 00:27:57.478 ************************************ 00:27:57.478 00:27:57.478 real 2m13.434s 00:27:57.478 user 2m0.984s 
00:27:57.478 sys 0m12.629s 00:27:57.478 12:58:56 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.478 12:58:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:57.478 12:58:56 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:57.478 12:58:56 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:57.478 12:58:56 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.478 12:58:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:57.478 ************************************ 00:27:57.478 START TEST ftl_dirty_shutdown 00:27:57.478 ************************************ 00:27:57.478 12:58:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:57.478 * Looking for test storage... 00:27:57.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.478 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.479 --rc genhtml_branch_coverage=1 00:27:57.479 --rc genhtml_function_coverage=1 00:27:57.479 --rc genhtml_legend=1 00:27:57.479 --rc geninfo_all_blocks=1 00:27:57.479 --rc geninfo_unexecuted_blocks=1 00:27:57.479 00:27:57.479 ' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.479 --rc genhtml_branch_coverage=1 00:27:57.479 --rc genhtml_function_coverage=1 00:27:57.479 --rc genhtml_legend=1 00:27:57.479 --rc geninfo_all_blocks=1 00:27:57.479 --rc geninfo_unexecuted_blocks=1 00:27:57.479 00:27:57.479 ' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.479 --rc genhtml_branch_coverage=1 00:27:57.479 --rc genhtml_function_coverage=1 00:27:57.479 --rc genhtml_legend=1 00:27:57.479 --rc geninfo_all_blocks=1 00:27:57.479 --rc geninfo_unexecuted_blocks=1 00:27:57.479 00:27:57.479 ' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.479 --rc genhtml_branch_coverage=1 00:27:57.479 --rc genhtml_function_coverage=1 00:27:57.479 --rc genhtml_legend=1 00:27:57.479 --rc geninfo_all_blocks=1 00:27:57.479 --rc geninfo_unexecuted_blocks=1 00:27:57.479 00:27:57.479 ' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:57.479 12:58:57 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89720 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89720 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 89720 ']' 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:57.479 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.479 [2024-12-05 12:58:57.212558] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
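[annotation] The trace above shows dirty_shutdown.sh@44 launching spdk_tgt with core mask 0x1 and waitforlisten blocking until the RPC socket /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait step, assuming only the binary path, core mask, and socket path visible in this trace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_get_methods is used here merely as a generic readiness probe:

    #!/usr/bin/env bash
    # Sketch: launch the SPDK target pinned to core 0, as dirty_shutdown.sh@44 does.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt" -m 0x1 &
    svcpid=$!

    # Simplified stand-in for waitforlisten: poll the UNIX-domain RPC socket
    # until the target responds (rpc_get_methods is a harmless readiness probe).
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "spdk_tgt (pid $svcpid) is listening on /var/tmp/spdk.sock"

Only once this wait returns does the script start issuing the bdev RPCs traced below.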
00:27:57.479 [2024-12-05 12:58:57.212890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89720 ] 00:27:57.772 [2024-12-05 12:58:57.366088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.772 [2024-12-05 12:58:57.390243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:58.338 12:58:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:58.596 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:58.855 { 00:27:58.855 "name": "nvme0n1", 00:27:58.855 "aliases": [ 00:27:58.855 "2d745712-073c-441b-8d71-46a3da96e484" 00:27:58.855 ], 00:27:58.855 "product_name": "NVMe disk", 00:27:58.855 "block_size": 4096, 00:27:58.855 "num_blocks": 1310720, 00:27:58.855 "uuid": "2d745712-073c-441b-8d71-46a3da96e484", 00:27:58.855 "numa_id": -1, 00:27:58.855 "assigned_rate_limits": { 00:27:58.855 "rw_ios_per_sec": 0, 00:27:58.855 "rw_mbytes_per_sec": 0, 00:27:58.855 "r_mbytes_per_sec": 0, 00:27:58.855 "w_mbytes_per_sec": 0 00:27:58.855 }, 00:27:58.855 "claimed": true, 00:27:58.855 "claim_type": "read_many_write_one", 00:27:58.855 "zoned": false, 00:27:58.855 "supported_io_types": { 00:27:58.855 "read": true, 00:27:58.855 "write": true, 00:27:58.855 "unmap": true, 00:27:58.855 "flush": true, 00:27:58.855 "reset": true, 00:27:58.855 "nvme_admin": true, 00:27:58.855 "nvme_io": true, 00:27:58.855 "nvme_io_md": false, 00:27:58.855 "write_zeroes": true, 00:27:58.855 "zcopy": false, 00:27:58.855 "get_zone_info": false, 00:27:58.855 "zone_management": false, 00:27:58.855 "zone_append": false, 00:27:58.855 "compare": true, 00:27:58.855 "compare_and_write": false, 00:27:58.855 "abort": true, 00:27:58.855 "seek_hole": false, 00:27:58.855 "seek_data": false, 00:27:58.855 
"copy": true, 00:27:58.855 "nvme_iov_md": false 00:27:58.855 }, 00:27:58.855 "driver_specific": { 00:27:58.855 "nvme": [ 00:27:58.855 { 00:27:58.855 "pci_address": "0000:00:11.0", 00:27:58.855 "trid": { 00:27:58.855 "trtype": "PCIe", 00:27:58.855 "traddr": "0000:00:11.0" 00:27:58.855 }, 00:27:58.855 "ctrlr_data": { 00:27:58.855 "cntlid": 0, 00:27:58.855 "vendor_id": "0x1b36", 00:27:58.855 "model_number": "QEMU NVMe Ctrl", 00:27:58.855 "serial_number": "12341", 00:27:58.855 "firmware_revision": "8.0.0", 00:27:58.855 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:58.855 "oacs": { 00:27:58.855 "security": 0, 00:27:58.855 "format": 1, 00:27:58.855 "firmware": 0, 00:27:58.855 "ns_manage": 1 00:27:58.855 }, 00:27:58.855 "multi_ctrlr": false, 00:27:58.855 "ana_reporting": false 00:27:58.855 }, 00:27:58.855 "vs": { 00:27:58.855 "nvme_version": "1.4" 00:27:58.855 }, 00:27:58.855 "ns_data": { 00:27:58.855 "id": 1, 00:27:58.855 "can_share": false 00:27:58.855 } 00:27:58.855 } 00:27:58.855 ], 00:27:58.855 "mp_policy": "active_passive" 00:27:58.855 } 00:27:58.855 } 00:27:58.855 ]' 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:58.855 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:59.113 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0bbf29d8-faad-430a-b7ff-ec3f3421fdd8 00:27:59.113 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:59.113 12:58:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0bbf29d8-faad-430a-b7ff-ec3f3421fdd8 00:27:59.371 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=eb330788-87a8-4a32-980e-b36def85bd75 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u eb330788-87a8-4a32-980e-b36def85bd75 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=d4c82c97-d844-4067-9e4a-1d025015f02f 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d4c82c97-d844-4067-9e4a-1d025015f02f 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=d4c82c97-d844-4067-9e4a-1d025015f02f 00:27:59.628 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size d4c82c97-d844-4067-9e4a-1d025015f02f 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d4c82c97-d844-4067-9e4a-1d025015f02f 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4c82c97-d844-4067-9e4a-1d025015f02f 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:59.885 { 00:27:59.885 "name": "d4c82c97-d844-4067-9e4a-1d025015f02f", 00:27:59.885 "aliases": [ 00:27:59.885 "lvs/nvme0n1p0" 00:27:59.885 ], 00:27:59.885 "product_name": "Logical Volume", 00:27:59.885 "block_size": 4096, 00:27:59.885 "num_blocks": 26476544, 00:27:59.885 "uuid": "d4c82c97-d844-4067-9e4a-1d025015f02f", 00:27:59.885 "assigned_rate_limits": { 00:27:59.885 "rw_ios_per_sec": 0, 00:27:59.885 "rw_mbytes_per_sec": 0, 00:27:59.885 "r_mbytes_per_sec": 0, 00:27:59.885 "w_mbytes_per_sec": 0 00:27:59.885 }, 00:27:59.885 "claimed": false, 00:27:59.885 "zoned": false, 00:27:59.885 "supported_io_types": { 00:27:59.885 "read": true, 00:27:59.885 "write": true, 00:27:59.885 "unmap": true, 00:27:59.885 "flush": false, 00:27:59.885 "reset": true, 00:27:59.885 "nvme_admin": false, 00:27:59.885 "nvme_io": false, 00:27:59.885 "nvme_io_md": false, 00:27:59.885 "write_zeroes": true, 00:27:59.885 "zcopy": false, 00:27:59.885 "get_zone_info": false, 00:27:59.885 "zone_management": false, 00:27:59.885 "zone_append": false, 00:27:59.885 "compare": false, 00:27:59.885 "compare_and_write": false, 00:27:59.885 "abort": false, 00:27:59.885 "seek_hole": true, 00:27:59.885 "seek_data": true, 00:27:59.885 "copy": false, 00:27:59.885 "nvme_iov_md": false 00:27:59.885 }, 00:27:59.885 "driver_specific": { 00:27:59.885 "lvol": { 00:27:59.885 "lvol_store_uuid": "eb330788-87a8-4a32-980e-b36def85bd75", 00:27:59.885 "base_bdev": "nvme0n1", 00:27:59.885 "thin_provision": true, 00:27:59.885 "num_allocated_clusters": 0, 00:27:59.885 "snapshot": false, 00:27:59.885 "clone": false, 00:27:59.885 "esnap_clone": false 00:27:59.885 } 00:27:59.885 } 00:27:59.885 } 00:27:59.885 ]' 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:59.885 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:00.141 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:00.141 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:00.141 12:58:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:00.141 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:00.141 12:58:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:00.141 12:58:59 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size d4c82c97-d844-4067-9e4a-1d025015f02f 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d4c82c97-d844-4067-9e4a-1d025015f02f 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4c82c97-d844-4067-9e4a-1d025015f02f 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:00.398 { 00:28:00.398 "name": "d4c82c97-d844-4067-9e4a-1d025015f02f", 00:28:00.398 "aliases": [ 00:28:00.398 "lvs/nvme0n1p0" 00:28:00.398 ], 00:28:00.398 "product_name": "Logical Volume", 00:28:00.398 "block_size": 4096, 00:28:00.398 "num_blocks": 26476544, 00:28:00.398 "uuid": "d4c82c97-d844-4067-9e4a-1d025015f02f", 00:28:00.398 "assigned_rate_limits": { 00:28:00.398 "rw_ios_per_sec": 0, 00:28:00.398 "rw_mbytes_per_sec": 0, 00:28:00.398 "r_mbytes_per_sec": 0, 00:28:00.398 "w_mbytes_per_sec": 0 00:28:00.398 }, 00:28:00.398 "claimed": false, 00:28:00.398 "zoned": false, 00:28:00.398 "supported_io_types": { 00:28:00.398 "read": true, 00:28:00.398 "write": true, 00:28:00.398 "unmap": true, 00:28:00.398 "flush": false, 00:28:00.398 "reset": true, 00:28:00.398 "nvme_admin": false, 00:28:00.398 "nvme_io": false, 00:28:00.398 "nvme_io_md": false, 00:28:00.398 "write_zeroes": true, 00:28:00.398 "zcopy": false, 00:28:00.398 "get_zone_info": false, 00:28:00.398 "zone_management": false, 00:28:00.398 "zone_append": false, 00:28:00.398 "compare": false, 00:28:00.398 "compare_and_write": false, 00:28:00.398 "abort": false, 00:28:00.398 "seek_hole": true, 00:28:00.398 "seek_data": true, 00:28:00.398 "copy": false, 00:28:00.398 "nvme_iov_md": false 00:28:00.398 }, 00:28:00.398 "driver_specific": { 00:28:00.398 "lvol": { 00:28:00.398 "lvol_store_uuid": "eb330788-87a8-4a32-980e-b36def85bd75", 00:28:00.398 "base_bdev": "nvme0n1", 00:28:00.398 "thin_provision": true, 00:28:00.398 "num_allocated_clusters": 0, 00:28:00.398 "snapshot": false, 00:28:00.398 "clone": false, 00:28:00.398 "esnap_clone": false 00:28:00.398 } 00:28:00.398 } 00:28:00.398 } 00:28:00.398 ]' 00:28:00.398 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:00.655 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:00.655 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:00.655 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:00.655 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:00.655 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:00.655 12:59:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:00.655 12:59:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size d4c82c97-d844-4067-9e4a-1d025015f02f 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d4c82c97-d844-4067-9e4a-1d025015f02f 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4c82c97-d844-4067-9e4a-1d025015f02f 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:00.913 { 00:28:00.913 "name": "d4c82c97-d844-4067-9e4a-1d025015f02f", 00:28:00.913 "aliases": [ 00:28:00.913 "lvs/nvme0n1p0" 00:28:00.913 ], 00:28:00.913 "product_name": "Logical Volume", 00:28:00.913 "block_size": 4096, 00:28:00.913 "num_blocks": 26476544, 00:28:00.913 "uuid": "d4c82c97-d844-4067-9e4a-1d025015f02f", 00:28:00.913 "assigned_rate_limits": { 00:28:00.913 "rw_ios_per_sec": 0, 00:28:00.913 "rw_mbytes_per_sec": 0, 00:28:00.913 "r_mbytes_per_sec": 0, 00:28:00.913 "w_mbytes_per_sec": 0 00:28:00.913 }, 00:28:00.913 "claimed": false, 00:28:00.913 "zoned": false, 00:28:00.913 "supported_io_types": { 00:28:00.913 "read": true, 00:28:00.913 "write": true, 00:28:00.913 "unmap": true, 00:28:00.913 "flush": false, 00:28:00.913 "reset": true, 00:28:00.913 "nvme_admin": false, 00:28:00.913 "nvme_io": false, 00:28:00.913 "nvme_io_md": false, 00:28:00.913 "write_zeroes": true, 00:28:00.913 "zcopy": false, 00:28:00.913 "get_zone_info": false, 00:28:00.913 "zone_management": false, 00:28:00.913 "zone_append": false, 00:28:00.913 "compare": false, 00:28:00.913 "compare_and_write": false, 00:28:00.913 "abort": false, 00:28:00.913 "seek_hole": true, 00:28:00.913 "seek_data": true, 00:28:00.913 "copy": false, 00:28:00.913 "nvme_iov_md": false 00:28:00.913 }, 00:28:00.913 "driver_specific": { 00:28:00.913 "lvol": { 00:28:00.913 "lvol_store_uuid": "eb330788-87a8-4a32-980e-b36def85bd75", 00:28:00.913 "base_bdev": "nvme0n1", 00:28:00.913 "thin_provision": true, 00:28:00.913 "num_allocated_clusters": 0, 00:28:00.913 "snapshot": false, 00:28:00.913 "clone": false, 00:28:00.913 "esnap_clone": false 00:28:00.913 } 00:28:00.913 } 00:28:00.913 } 00:28:00.913 ]' 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:00.913 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d4c82c97-d844-4067-9e4a-1d025015f02f 
--l2p_dram_limit 10' 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:01.172 12:59:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d4c82c97-d844-4067-9e4a-1d025015f02f --l2p_dram_limit 10 -c nvc0n1p0 00:28:01.172 [2024-12-05 12:59:00.967029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.967281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:01.172 [2024-12-05 12:59:00.967302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:01.172 [2024-12-05 12:59:00.967312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.967387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.967401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.172 [2024-12-05 12:59:00.967408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:01.172 [2024-12-05 12:59:00.967419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.967448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:01.172 [2024-12-05 12:59:00.967720] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:01.172 [2024-12-05 12:59:00.967735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.967744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.172 [2024-12-05 12:59:00.967752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:28:01.172 [2024-12-05 12:59:00.967761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.967837] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9fe379a6-2e0e-4789-9696-553f79e62356 00:28:01.172 [2024-12-05 12:59:00.969252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.969286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:01.172 [2024-12-05 12:59:00.969298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:01.172 [2024-12-05 12:59:00.969305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.976799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.976851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.172 [2024-12-05 12:59:00.976863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.437 ms 00:28:01.172 [2024-12-05 12:59:00.976870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.976954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.976962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.172 [2024-12-05 12:59:00.976971] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:01.172 [2024-12-05 12:59:00.976977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.977032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.977040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:01.172 [2024-12-05 12:59:00.977050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:01.172 [2024-12-05 12:59:00.977057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.977081] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:01.172 [2024-12-05 12:59:00.978856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.978885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.172 [2024-12-05 12:59:00.978899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:28:01.172 [2024-12-05 12:59:00.978907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.978941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.172 [2024-12-05 12:59:00.978951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:01.172 [2024-12-05 12:59:00.978958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:01.172 [2024-12-05 12:59:00.978968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.172 [2024-12-05 12:59:00.978984] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:01.172 [2024-12-05 12:59:00.979118] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:01.172 [2024-12-05 12:59:00.979132] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:01.172 [2024-12-05 12:59:00.979147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:01.172 [2024-12-05 12:59:00.979155] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979168] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979175] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:01.173 [2024-12-05 12:59:00.979185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:01.173 [2024-12-05 12:59:00.979191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:01.173 [2024-12-05 12:59:00.979199] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:01.173 [2024-12-05 12:59:00.979209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-12-05 12:59:00.979216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:01.173 [2024-12-05 12:59:00.979223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:28:01.173 [2024-12-05 12:59:00.979231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-12-05 12:59:00.979300] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-12-05 12:59:00.979310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:01.173 [2024-12-05 12:59:00.979316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:01.173 [2024-12-05 12:59:00.979328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-12-05 12:59:00.979426] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:01.173 [2024-12-05 12:59:00.979437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:01.173 [2024-12-05 12:59:00.979444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:01.173 [2024-12-05 12:59:00.979466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:01.173 [2024-12-05 12:59:00.979484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.173 [2024-12-05 12:59:00.979497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:01.173 [2024-12-05 12:59:00.979505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:01.173 [2024-12-05 12:59:00.979511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:01.173 [2024-12-05 12:59:00.979521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:01.173 [2024-12-05 12:59:00.979528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:01.173 [2024-12-05 12:59:00.979535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:01.173 [2024-12-05 12:59:00.979552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:01.173 [2024-12-05 12:59:00.979574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:01.173 [2024-12-05 12:59:00.979596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:01.173 [2024-12-05 12:59:00.979618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979632] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:01.173 [2024-12-05 12:59:00.979642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:01.173 [2024-12-05 12:59:00.979664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.173 [2024-12-05 12:59:00.979679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:01.173 [2024-12-05 12:59:00.979687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:01.173 [2024-12-05 12:59:00.979693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:01.173 [2024-12-05 12:59:00.979701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:01.173 [2024-12-05 12:59:00.979707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:01.173 [2024-12-05 12:59:00.979715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:01.173 [2024-12-05 12:59:00.979729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:01.173 [2024-12-05 12:59:00.979735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979742] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:01.173 [2024-12-05 12:59:00.979750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:01.173 [2024-12-05 12:59:00.979759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:01.173 [2024-12-05 12:59:00.979776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:01.173 [2024-12-05 12:59:00.979782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:01.173 [2024-12-05 12:59:00.979791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:01.173 [2024-12-05 12:59:00.979798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:01.173 [2024-12-05 12:59:00.979824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:01.173 [2024-12-05 12:59:00.979831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:01.173 [2024-12-05 12:59:00.979841] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:01.173 [2024-12-05 12:59:00.979852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.173 [2024-12-05 12:59:00.979862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:01.173 [2024-12-05 12:59:00.979869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:01.173 [2024-12-05 12:59:00.979878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:01.173 [2024-12-05 12:59:00.979887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:01.173 [2024-12-05 12:59:00.979895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:01.173 [2024-12-05 12:59:00.979902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:01.173 [2024-12-05 12:59:00.979912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:01.173 [2024-12-05 12:59:00.979919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:01.173 [2024-12-05 12:59:00.979928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:01.173 [2024-12-05 12:59:00.979935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:01.173 [2024-12-05 12:59:00.979943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:01.173 [2024-12-05 12:59:00.979950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:01.173 [2024-12-05 12:59:00.979958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:01.173 [2024-12-05 12:59:00.979965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:01.173 [2024-12-05 12:59:00.979973] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:01.173 [2024-12-05 12:59:00.979980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:01.173 [2024-12-05 12:59:00.979989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:01.173 [2024-12-05 12:59:00.979995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:01.173 [2024-12-05 12:59:00.980003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:01.173 [2024-12-05 12:59:00.980009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:01.173 [2024-12-05 12:59:00.980016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.173 [2024-12-05 12:59:00.980022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:01.173 [2024-12-05 12:59:00.980032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:28:01.173 [2024-12-05 12:59:00.980041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.173 [2024-12-05 12:59:00.980074] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:01.173 [2024-12-05 12:59:00.980082] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:04.448 [2024-12-05 12:59:03.564246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.564317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:04.448 [2024-12-05 12:59:03.564343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2584.153 ms 00:28:04.448 [2024-12-05 12:59:03.564352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.575539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.575584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:04.448 [2024-12-05 12:59:03.575599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.090 ms 00:28:04.448 [2024-12-05 12:59:03.575608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.575757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.575768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:04.448 [2024-12-05 12:59:03.575779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:04.448 [2024-12-05 12:59:03.575787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.586549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.586592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.448 [2024-12-05 12:59:03.586607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.708 ms 00:28:04.448 [2024-12-05 12:59:03.586618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.586658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.586666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.448 [2024-12-05 12:59:03.586676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:04.448 [2024-12-05 12:59:03.586684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.587148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.587170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.448 [2024-12-05 12:59:03.587183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:28:04.448 [2024-12-05 12:59:03.587196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.587319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.587329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.448 [2024-12-05 12:59:03.587340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:04.448 [2024-12-05 12:59:03.587348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.594266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.594298] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.448 [2024-12-05 12:59:03.594310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.895 ms 00:28:04.448 [2024-12-05 12:59:03.594318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.615448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:04.448 [2024-12-05 12:59:03.619725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.619780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:04.448 [2024-12-05 12:59:03.619802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.336 ms 00:28:04.448 [2024-12-05 12:59:03.619842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.665011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.665206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:04.448 [2024-12-05 12:59:03.665230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.110 ms 00:28:04.448 [2024-12-05 12:59:03.665243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.665437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.665454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:04.448 [2024-12-05 12:59:03.665464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:28:04.448 [2024-12-05 12:59:03.665474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.668601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.668733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:04.448 [2024-12-05 12:59:03.668754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:28:04.448 [2024-12-05 12:59:03.668764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.671078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.671108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:04.448 [2024-12-05 12:59:03.671118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:28:04.448 [2024-12-05 12:59:03.671127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.671428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.671450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:04.448 [2024-12-05 12:59:03.671459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:28:04.448 [2024-12-05 12:59:03.671471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.695429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.695836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:04.448 [2024-12-05 12:59:03.695898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.934 ms 00:28:04.448 [2024-12-05 12:59:03.695930] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.704165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.704264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:04.448 [2024-12-05 12:59:03.704296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.106 ms 00:28:04.448 [2024-12-05 12:59:03.704324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.708532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.708564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:04.448 [2024-12-05 12:59:03.708572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.124 ms 00:28:04.448 [2024-12-05 12:59:03.708579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.711355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.711386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:04.448 [2024-12-05 12:59:03.711394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.748 ms 00:28:04.448 [2024-12-05 12:59:03.711404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.711435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.711451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:04.448 [2024-12-05 12:59:03.711459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:04.448 [2024-12-05 12:59:03.711467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.711525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.448 [2024-12-05 12:59:03.711535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:04.448 [2024-12-05 12:59:03.711542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:04.448 [2024-12-05 12:59:03.711552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.448 [2024-12-05 12:59:03.712747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2745.289 ms, result 0 00:28:04.448 { 00:28:04.448 "name": "ftl0", 00:28:04.448 "uuid": "9fe379a6-2e0e-4789-9696-553f79e62356" 00:28:04.448 } 00:28:04.448 12:59:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:04.448 12:59:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:04.448 12:59:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:04.448 12:59:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:04.448 12:59:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:04.448 /dev/nbd0 00:28:04.448 12:59:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:04.448 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:04.448 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:04.448 12:59:04 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:04.448 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:04.448 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:04.449 1+0 records in 00:28:04.449 1+0 records out 00:28:04.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340475 s, 12.0 MB/s 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:04.449 12:59:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:04.449 [2024-12-05 12:59:04.245883] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:28:04.449 [2024-12-05 12:59:04.246019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89852 ] 00:28:04.707 [2024-12-05 12:59:04.404870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.707 [2024-12-05 12:59:04.431631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.137  [2024-12-05T12:59:06.556Z] Copying: 191/1024 [MB] (191 MBps) [2024-12-05T12:59:07.926Z] Copying: 387/1024 [MB] (195 MBps) [2024-12-05T12:59:08.856Z] Copying: 602/1024 [MB] (214 MBps) [2024-12-05T12:59:09.421Z] Copying: 850/1024 [MB] (248 MBps) [2024-12-05T12:59:09.421Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:28:09.569 00:28:09.569 12:59:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:12.177 12:59:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:12.177 [2024-12-05 12:59:11.656390] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
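[annotation] Both spdk_dd runs in this trace use the same geometry: 262144 blocks of 4096 bytes, i.e. 1 GiB. The first (dirty_shutdown.sh@75) fills the reference file from /dev/urandom; after a checksum is taken (dirty_shutdown.sh@76), the second (dirty_shutdown.sh@77) replays the file onto the FTL bdev through /dev/nbd0 with O_DIRECT. A condensed sketch of that data-load phase, using only the flags that appear in the trace; the redirect into testfile.md5 is an assumption, inferred from the md5sum -c seen in ftl_restore above:

    # Data-load phase of the dirty-shutdown test, flags as shown in the trace.
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

    # 1 GiB of random reference data: 262144 blocks x 4096 bytes.
    "$spdk_dd" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144

    # Record the checksum for post-recovery verification (redirect assumed).
    md5sum "$testfile" > "$testfile.md5"

    # Replay the file onto ftl0 via the nbd device, bypassing the page cache.
    "$spdk_dd" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct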
00:28:12.177 [2024-12-05 12:59:11.656532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89934 ] 00:28:12.177 [2024-12-05 12:59:11.816699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.177 [2024-12-05 12:59:11.841634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.109  [2024-12-05T12:59:14.333Z] Copying: 30/1024 [MB] (30 MBps) [2024-12-05T12:59:15.274Z] Copying: 60/1024 [MB] (30 MBps) [2024-12-05T12:59:16.208Z] Copying: 89/1024 [MB] (28 MBps) [2024-12-05T12:59:17.141Z] Copying: 115/1024 [MB] (25 MBps) [2024-12-05T12:59:18.105Z] Copying: 144/1024 [MB] (29 MBps) [2024-12-05T12:59:19.038Z] Copying: 172/1024 [MB] (27 MBps) [2024-12-05T12:59:19.970Z] Copying: 201/1024 [MB] (28 MBps) [2024-12-05T12:59:21.342Z] Copying: 230/1024 [MB] (29 MBps) [2024-12-05T12:59:22.274Z] Copying: 257/1024 [MB] (27 MBps) [2024-12-05T12:59:23.248Z] Copying: 286/1024 [MB] (28 MBps) [2024-12-05T12:59:24.182Z] Copying: 313/1024 [MB] (27 MBps) [2024-12-05T12:59:25.116Z] Copying: 341/1024 [MB] (28 MBps) [2024-12-05T12:59:26.128Z] Copying: 371/1024 [MB] (29 MBps) [2024-12-05T12:59:27.062Z] Copying: 403/1024 [MB] (31 MBps) [2024-12-05T12:59:27.997Z] Copying: 433/1024 [MB] (30 MBps) [2024-12-05T12:59:28.931Z] Copying: 462/1024 [MB] (28 MBps) [2024-12-05T12:59:30.339Z] Copying: 491/1024 [MB] (28 MBps) [2024-12-05T12:59:31.276Z] Copying: 520/1024 [MB] (29 MBps) [2024-12-05T12:59:32.208Z] Copying: 546/1024 [MB] (26 MBps) [2024-12-05T12:59:33.139Z] Copying: 576/1024 [MB] (29 MBps) [2024-12-05T12:59:34.071Z] Copying: 607/1024 [MB] (31 MBps) [2024-12-05T12:59:35.004Z] Copying: 632168/1048576 [kB] (9752 kBps) [2024-12-05T12:59:35.952Z] Copying: 635104/1048576 [kB] (2936 kBps) [2024-12-05T12:59:37.320Z] Copying: 651/1024 [MB] (31 MBps) [2024-12-05T12:59:38.250Z] Copying: 685/1024 [MB] (33 MBps) [2024-12-05T12:59:39.181Z] Copying: 715/1024 [MB] (30 MBps) [2024-12-05T12:59:40.114Z] Copying: 743/1024 [MB] (28 MBps) [2024-12-05T12:59:41.049Z] Copying: 771/1024 [MB] (28 MBps) [2024-12-05T12:59:42.034Z] Copying: 801/1024 [MB] (29 MBps) [2024-12-05T12:59:42.968Z] Copying: 832/1024 [MB] (30 MBps) [2024-12-05T12:59:43.948Z] Copying: 861/1024 [MB] (29 MBps) [2024-12-05T12:59:45.319Z] Copying: 890/1024 [MB] (29 MBps) [2024-12-05T12:59:46.250Z] Copying: 917/1024 [MB] (26 MBps) [2024-12-05T12:59:47.181Z] Copying: 940/1024 [MB] (22 MBps) [2024-12-05T12:59:48.134Z] Copying: 965/1024 [MB] (25 MBps) [2024-12-05T12:59:49.066Z] Copying: 991/1024 [MB] (25 MBps) [2024-12-05T12:59:50.002Z] Copying: 1008/1024 [MB] (17 MBps) [2024-12-05T12:59:50.002Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:28:50.150 00:28:50.150 12:59:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:50.150 12:59:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:50.409 12:59:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:50.668 [2024-12-05 12:59:50.357056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.668 [2024-12-05 12:59:50.357321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:50.668 [2024-12-05 12:59:50.357353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.004 ms 00:28:50.668 [2024-12-05 12:59:50.357362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.668 [2024-12-05 12:59:50.357397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:50.668 [2024-12-05 12:59:50.358025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.668 [2024-12-05 12:59:50.358049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:50.668 [2024-12-05 12:59:50.358064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:28:50.668 [2024-12-05 12:59:50.358076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.668 [2024-12-05 12:59:50.359595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.668 [2024-12-05 12:59:50.359624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:50.668 [2024-12-05 12:59:50.359635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:28:50.668 [2024-12-05 12:59:50.359645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.668 [2024-12-05 12:59:50.375329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.668 [2024-12-05 12:59:50.375395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:50.668 [2024-12-05 12:59:50.375411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.664 ms 00:28:50.668 [2024-12-05 12:59:50.375421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.668 [2024-12-05 12:59:50.381647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.669 [2024-12-05 12:59:50.381683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:50.669 [2024-12-05 12:59:50.381695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.190 ms 00:28:50.669 [2024-12-05 12:59:50.381705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.383281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.669 [2024-12-05 12:59:50.383322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:50.669 [2024-12-05 12:59:50.383331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:28:50.669 [2024-12-05 12:59:50.383341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.387181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.669 [2024-12-05 12:59:50.387219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:50.669 [2024-12-05 12:59:50.387229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:28:50.669 [2024-12-05 12:59:50.387239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.387368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.669 [2024-12-05 12:59:50.387380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:50.669 [2024-12-05 12:59:50.387389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:50.669 [2024-12-05 12:59:50.387398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.389071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
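[annotation] This is the orderly half of the experiment: bdev_ftl_unload walks a fixed persist pipeline — stop the IO channels and the core poller, then persist L2P, finish L2P trims, and persist NV cache metadata, the valid map, and P2L, followed just below by band info, trim metadata, and the superblock — before setting the FTL clean state. The three commands driving it, copied from the dirty_shutdown.sh xtrace earlier in the log (paths shortened to $rootdir as above):

    sync /dev/nbd0                                      # flush the kernel block layer
    "$rootdir/scripts/rpc.py" nbd_stop_disk /dev/nbd0   # detach the NBD export
    "$rootdir/scripts/rpc.py" bdev_ftl_unload -b ftl0   # run the persist pipeline traced here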
00:28:50.669 [2024-12-05 12:59:50.389103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:50.669 [2024-12-05 12:59:50.389112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:28:50.669 [2024-12-05 12:59:50.389121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.390270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.669 [2024-12-05 12:59:50.390306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:50.669 [2024-12-05 12:59:50.390315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:28:50.669 [2024-12-05 12:59:50.390323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.391103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.669 [2024-12-05 12:59:50.391135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:50.669 [2024-12-05 12:59:50.391144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:28:50.669 [2024-12-05 12:59:50.391153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.392006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.669 [2024-12-05 12:59:50.392137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:50.669 [2024-12-05 12:59:50.392151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:28:50.669 [2024-12-05 12:59:50.392160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.669 [2024-12-05 12:59:50.392190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:50.669 [2024-12-05 12:59:50.392209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 
12:59:50.392324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:28:50.669 [2024-12-05 12:59:50.392538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:50.669 [2024-12-05 12:59:50.392730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.392799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:50.670 [2024-12-05 12:59:50.393829] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:50.670 [2024-12-05 12:59:50.393840] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fe379a6-2e0e-4789-9696-553f79e62356 00:28:50.670 [2024-12-05 12:59:50.393857] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:50.670 [2024-12-05 12:59:50.393864] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:50.670 [2024-12-05 12:59:50.393874] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:50.670 [2024-12-05 12:59:50.393885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:50.670 [2024-12-05 12:59:50.393895] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:50.670 [2024-12-05 12:59:50.393902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:50.670 [2024-12-05 12:59:50.393915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:50.670 [2024-12-05 12:59:50.393922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:50.670 [2024-12-05 12:59:50.393931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:50.670 [2024-12-05 12:59:50.393940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.670 [2024-12-05 12:59:50.393950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:50.670 [2024-12-05 12:59:50.393962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:28:50.670 [2024-12-05 12:59:50.393971] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.395788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.670 [2024-12-05 12:59:50.395918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:50.670 [2024-12-05 12:59:50.395949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.788 ms 00:28:50.670 [2024-12-05 12:59:50.395971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.396094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:50.670 [2024-12-05 12:59:50.396126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:50.670 [2024-12-05 12:59:50.396147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:50.670 [2024-12-05 12:59:50.396212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.402649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.402764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:50.670 [2024-12-05 12:59:50.402832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.402893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.402973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.403000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:50.670 [2024-12-05 12:59:50.403035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.403056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.403156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.403188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:50.670 [2024-12-05 12:59:50.403208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.403229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.403264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.403292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:50.670 [2024-12-05 12:59:50.403313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.403333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.415200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.415367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:50.670 [2024-12-05 12:59:50.415418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.415444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.424932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.425107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:50.670 [2024-12-05 12:59:50.425165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.425190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.425293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.425394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:50.670 [2024-12-05 12:59:50.425465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.425486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.425545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.425571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:50.670 [2024-12-05 12:59:50.425591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.425615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.425725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.425774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:50.670 [2024-12-05 12:59:50.425794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.670 [2024-12-05 12:59:50.425873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.670 [2024-12-05 12:59:50.425921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.670 [2024-12-05 12:59:50.425978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:50.671 [2024-12-05 12:59:50.425998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.671 [2024-12-05 12:59:50.426019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.671 [2024-12-05 12:59:50.426074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.671 [2024-12-05 12:59:50.426140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:50.671 [2024-12-05 12:59:50.426163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.671 [2024-12-05 12:59:50.426185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.671 [2024-12-05 12:59:50.426246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:50.671 [2024-12-05 12:59:50.426374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:50.671 [2024-12-05 12:59:50.426406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:50.671 [2024-12-05 12:59:50.426431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:50.671 [2024-12-05 12:59:50.426630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.541 ms, result 0 00:28:50.671 true 00:28:50.671 12:59:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89720 00:28:50.671 12:59:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89720 00:28:50.671 12:59:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:50.671 [2024-12-05 12:59:50.513429] Starting SPDK v25.01-pre git sha1 
8d3947977 / DPDK 23.11.0 initialization... 00:28:50.671 [2024-12-05 12:59:50.513765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90338 ] 00:28:50.929 [2024-12-05 12:59:50.667496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.929 [2024-12-05 12:59:50.692678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.935  [2024-12-05T12:59:53.159Z] Copying: 216/1024 [MB] (216 MBps) [2024-12-05T12:59:54.137Z] Copying: 409/1024 [MB] (193 MBps) [2024-12-05T12:59:55.073Z] Copying: 600/1024 [MB] (191 MBps) [2024-12-05T12:59:56.007Z] Copying: 789/1024 [MB] (188 MBps) [2024-12-05T12:59:56.265Z] Copying: 977/1024 [MB] (187 MBps) [2024-12-05T12:59:56.265Z] Copying: 1024/1024 [MB] (average 194 MBps) 00:28:56.413 00:28:56.413 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89720 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:56.413 12:59:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:56.670 [2024-12-05 12:59:56.278611] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:28:56.670 [2024-12-05 12:59:56.278744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90402 ] 00:28:56.670 [2024-12-05 12:59:56.437413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.670 [2024-12-05 12:59:56.463578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.926 [2024-12-05 12:59:56.569802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:56.926 [2024-12-05 12:59:56.569902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:56.926 [2024-12-05 12:59:56.633241] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:56.926 [2024-12-05 12:59:56.633643] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:56.926 [2024-12-05 12:59:56.633891] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:57.184 [2024-12-05 12:59:56.819432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.819658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:57.185 [2024-12-05 12:59:56.819680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:57.185 [2024-12-05 12:59:56.819695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.819756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.819767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:57.185 [2024-12-05 12:59:56.819776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:57.185 [2024-12-05 12:59:56.819785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.819825] mngt/ftl_mngt_bdev.c: 
196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:57.185 [2024-12-05 12:59:56.820095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:57.185 [2024-12-05 12:59:56.820111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.820119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:57.185 [2024-12-05 12:59:56.820131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:28:57.185 [2024-12-05 12:59:56.820139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.821552] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:57.185 [2024-12-05 12:59:56.824289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.824325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:57.185 [2024-12-05 12:59:56.824336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms 00:28:57.185 [2024-12-05 12:59:56.824344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.824401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.824412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:57.185 [2024-12-05 12:59:56.824420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:57.185 [2024-12-05 12:59:56.824430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.831070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.831105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:57.185 [2024-12-05 12:59:56.831115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.588 ms 00:28:57.185 [2024-12-05 12:59:56.831124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.831222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.831233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:57.185 [2024-12-05 12:59:56.831241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:57.185 [2024-12-05 12:59:56.831252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.831290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.831303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:57.185 [2024-12-05 12:59:56.831317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:57.185 [2024-12-05 12:59:56.831324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.831346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:57.185 [2024-12-05 12:59:56.833148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.833176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:57.185 [2024-12-05 12:59:56.833190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.807 ms 00:28:57.185 [2024-12-05 12:59:56.833200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.833236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.833245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:57.185 [2024-12-05 12:59:56.833254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:57.185 [2024-12-05 12:59:56.833261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.833293] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:57.185 [2024-12-05 12:59:56.833318] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:57.185 [2024-12-05 12:59:56.833356] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:57.185 [2024-12-05 12:59:56.833375] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:57.185 [2024-12-05 12:59:56.833479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:57.185 [2024-12-05 12:59:56.833490] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:57.185 [2024-12-05 12:59:56.833502] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:57.185 [2024-12-05 12:59:56.833512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:57.185 [2024-12-05 12:59:56.833521] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:57.185 [2024-12-05 12:59:56.833530] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:57.185 [2024-12-05 12:59:56.833537] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:57.185 [2024-12-05 12:59:56.833545] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:57.185 [2024-12-05 12:59:56.833556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:57.185 [2024-12-05 12:59:56.833563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.833571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:57.185 [2024-12-05 12:59:56.833579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:28:57.185 [2024-12-05 12:59:56.833586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.833702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.185 [2024-12-05 12:59:56.833716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:57.185 [2024-12-05 12:59:56.833728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:28:57.185 [2024-12-05 12:59:56.833746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.185 [2024-12-05 12:59:56.833910] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:57.185 [2024-12-05 12:59:56.833928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:57.185 [2024-12-05 12:59:56.833939] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:57.185 [2024-12-05 12:59:56.833948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.185 [2024-12-05 12:59:56.833957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:57.185 [2024-12-05 12:59:56.833965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:57.185 [2024-12-05 12:59:56.833974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:57.185 [2024-12-05 12:59:56.833984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:57.185 [2024-12-05 12:59:56.833992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:57.185 [2024-12-05 12:59:56.834000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:57.185 [2024-12-05 12:59:56.834008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:57.185 [2024-12-05 12:59:56.834017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:57.185 [2024-12-05 12:59:56.834032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:57.185 [2024-12-05 12:59:56.834040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:57.185 [2024-12-05 12:59:56.834049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:57.185 [2024-12-05 12:59:56.834056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.185 [2024-12-05 12:59:56.834064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:57.185 [2024-12-05 12:59:56.834071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:57.185 [2024-12-05 12:59:56.834079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.185 [2024-12-05 12:59:56.834087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:57.185 [2024-12-05 12:59:56.834094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:57.185 [2024-12-05 12:59:56.834102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.185 [2024-12-05 12:59:56.834109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:57.186 [2024-12-05 12:59:56.834117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:57.186 [2024-12-05 12:59:56.834125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.186 [2024-12-05 12:59:56.834132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:57.186 [2024-12-05 12:59:56.834140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:57.186 [2024-12-05 12:59:56.834159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.186 [2024-12-05 12:59:56.834172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:57.186 [2024-12-05 12:59:56.834180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:57.186 [2024-12-05 12:59:56.834187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:57.186 [2024-12-05 12:59:56.834195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:57.186 [2024-12-05 12:59:56.834202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:57.186 [2024-12-05 12:59:56.834208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:57.186 [2024-12-05 
12:59:56.834215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:57.186 [2024-12-05 12:59:56.834222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:57.186 [2024-12-05 12:59:56.834229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:57.186 [2024-12-05 12:59:56.834235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:57.186 [2024-12-05 12:59:56.834242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:57.186 [2024-12-05 12:59:56.834248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.186 [2024-12-05 12:59:56.834255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:57.186 [2024-12-05 12:59:56.834262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:57.186 [2024-12-05 12:59:56.834270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.186 [2024-12-05 12:59:56.834277] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:57.186 [2024-12-05 12:59:56.834290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:57.186 [2024-12-05 12:59:56.834298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:57.186 [2024-12-05 12:59:56.834306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:57.186 [2024-12-05 12:59:56.834314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:57.186 [2024-12-05 12:59:56.834321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:57.186 [2024-12-05 12:59:56.834327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:57.186 [2024-12-05 12:59:56.834334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:57.186 [2024-12-05 12:59:56.834341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:57.186 [2024-12-05 12:59:56.834348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:57.186 [2024-12-05 12:59:56.834356] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:57.186 [2024-12-05 12:59:56.834365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.186 [2024-12-05 12:59:56.834374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:57.186 [2024-12-05 12:59:56.834381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:57.186 [2024-12-05 12:59:56.834388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:57.186 [2024-12-05 12:59:56.834395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:57.186 [2024-12-05 12:59:56.834405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:57.186 [2024-12-05 12:59:56.834415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:57.186 [2024-12-05 12:59:56.834423] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:57.186 [2024-12-05 12:59:56.834430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:57.186 [2024-12-05 12:59:56.834437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:57.186 [2024-12-05 12:59:56.834445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:57.186 [2024-12-05 12:59:56.834452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:57.186 [2024-12-05 12:59:56.834459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:57.186 [2024-12-05 12:59:56.834465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:57.186 [2024-12-05 12:59:56.834473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:57.186 [2024-12-05 12:59:56.834481] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:57.186 [2024-12-05 12:59:56.834492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:57.186 [2024-12-05 12:59:56.834500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:57.186 [2024-12-05 12:59:56.834508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:57.186 [2024-12-05 12:59:56.834516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:57.186 [2024-12-05 12:59:56.834524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:57.186 [2024-12-05 12:59:56.834532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.186 [2024-12-05 12:59:56.834541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:57.186 [2024-12-05 12:59:56.834549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:28:57.186 [2024-12-05 12:59:56.834556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.186 [2024-12-05 12:59:56.846796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.186 [2024-12-05 12:59:56.846961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:57.186 [2024-12-05 12:59:56.847021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.188 ms 00:28:57.186 [2024-12-05 12:59:56.847046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.186 [2024-12-05 12:59:56.847254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.186 [2024-12-05 12:59:56.847304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:57.186 [2024-12-05 12:59:56.847356] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:57.186 [2024-12-05 12:59:56.847379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.186 [2024-12-05 12:59:56.873881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.186 [2024-12-05 12:59:56.874194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:57.186 [2024-12-05 12:59:56.874223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.420 ms 00:28:57.186 [2024-12-05 12:59:56.874233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.186 [2024-12-05 12:59:56.874301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.186 [2024-12-05 12:59:56.874313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:57.186 [2024-12-05 12:59:56.874322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:57.186 [2024-12-05 12:59:56.874330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.186 [2024-12-05 12:59:56.874799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.187 [2024-12-05 12:59:56.874840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:57.187 [2024-12-05 12:59:56.874858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:28:57.187 [2024-12-05 12:59:56.874867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.187 [2024-12-05 12:59:56.875017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.187 [2024-12-05 12:59:56.875033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:57.187 [2024-12-05 12:59:56.875044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:28:57.187 [2024-12-05 12:59:56.875053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.187 [2024-12-05 12:59:56.881907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.187 [2024-12-05 12:59:56.881947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:57.187 [2024-12-05 12:59:56.881962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.831 ms 00:28:57.187 [2024-12-05 12:59:56.881978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.187 [2024-12-05 12:59:56.884884] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:57.187 [2024-12-05 12:59:56.885038] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:57.187 [2024-12-05 12:59:56.885058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.187 [2024-12-05 12:59:56.885068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:57.187 [2024-12-05 12:59:56.885076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.959 ms 00:28:57.187 [2024-12-05 12:59:56.885084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.187 [2024-12-05 12:59:56.899986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:57.187 [2024-12-05 12:59:56.900155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:57.187 [2024-12-05 12:59:56.900173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.863 ms 
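[annotation] These Restore steps are the payoff of that earlier persist pipeline: instead of initializing empty structures, this startup reloads NV cache state, the valid map, and — continuing below — band info, trim state, P2L checkpoints, and finally L2P from the regions in the layout dump above. Two cross-checks using only numbers printed in this trace (plain arithmetic, assuming the 4 KiB logical block size the dd runs use):

    L2P table size  = 20971520 entries x 4 B/entry   = 83886080 B = 80.00 MiB
                      (matches "Region l2p ... blocks: 80.00 MiB" in the layout dump)
    mapped capacity = 20971520 entries x 4 KiB/block = 80 GiB
                      (the rest of the 103424.00 MiB base device is presumably
                       metadata regions plus overprovisioned bands)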
00:28:57.187 [2024-12-05 12:59:56.900182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.902059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.902093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:28:57.187 [2024-12-05 12:59:56.902102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms
00:28:57.187 [2024-12-05 12:59:56.902111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.903592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.903711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:28:57.187 [2024-12-05 12:59:56.903725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms
00:28:57.187 [2024-12-05 12:59:56.903733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.904142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.904155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:28:57.187 [2024-12-05 12:59:56.904164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms
00:28:57.187 [2024-12-05 12:59:56.904172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.922924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.922988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:28:57.187 [2024-12-05 12:59:56.923002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.731 ms
00:28:57.187 [2024-12-05 12:59:56.923022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.931052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:57.187 [2024-12-05 12:59:56.934260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.934407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:28:57.187 [2024-12-05 12:59:56.934424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.178 ms
00:28:57.187 [2024-12-05 12:59:56.934433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.934527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.934540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:28:57.187 [2024-12-05 12:59:56.934552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:28:57.187 [2024-12-05 12:59:56.934560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.934641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.934653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:28:57.187 [2024-12-05 12:59:56.934662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:28:57.187 [2024-12-05 12:59:56.934670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.934691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.934700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:28:57.187 [2024-12-05 12:59:56.934708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:57.187 [2024-12-05 12:59:56.934724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.934761] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:57.187 [2024-12-05 12:59:56.934772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.934782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:57.187 [2024-12-05 12:59:56.934794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:28:57.187 [2024-12-05 12:59:56.934803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.938938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.938974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:57.187 [2024-12-05 12:59:56.938985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms
00:28:57.187 [2024-12-05 12:59:56.938994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.939069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:57.187 [2024-12-05 12:59:56.939085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:57.187 [2024-12-05 12:59:56.939098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:28:57.187 [2024-12-05 12:59:56.939106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:57.187 [2024-12-05 12:59:56.940166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.269 ms, result 0
00:28:58.117  [2024-12-05T12:59:59.337Z] Copying: 37/1024 [MB] (37 MBps) [2024-12-05T13:00:00.270Z] Copying: 82/1024 [MB] (44 MBps) [2024-12-05T13:00:01.211Z] Copying: 126/1024 [MB] (44 MBps) [2024-12-05T13:00:02.144Z] Copying: 171/1024 [MB] (44 MBps) [2024-12-05T13:00:03.083Z] Copying: 215/1024 [MB] (44 MBps) [2024-12-05T13:00:04.021Z] Copying: 242/1024 [MB] (27 MBps) [2024-12-05T13:00:04.961Z] Copying: 271/1024 [MB] (29 MBps) [2024-12-05T13:00:06.387Z] Copying: 304/1024 [MB] (32 MBps) [2024-12-05T13:00:06.956Z] Copying: 329/1024 [MB] (24 MBps) [2024-12-05T13:00:08.341Z] Copying: 356/1024 [MB] (27 MBps) [2024-12-05T13:00:09.281Z] Copying: 384/1024 [MB] (27 MBps) [2024-12-05T13:00:10.215Z] Copying: 399/1024 [MB] (15 MBps) [2024-12-05T13:00:11.150Z] Copying: 436/1024 [MB] (36 MBps) [2024-12-05T13:00:12.091Z] Copying: 468/1024 [MB] (31 MBps) [2024-12-05T13:00:13.043Z] Copying: 512/1024 [MB] (44 MBps) [2024-12-05T13:00:13.974Z] Copying: 558/1024 [MB] (45 MBps) [2024-12-05T13:00:15.343Z] Copying: 602/1024 [MB] (44 MBps) [2024-12-05T13:00:16.273Z] Copying: 644/1024 [MB] (42 MBps) [2024-12-05T13:00:17.213Z] Copying: 688/1024 [MB] (43 MBps) [2024-12-05T13:00:18.158Z] Copying: 732/1024 [MB] (44 MBps) [2024-12-05T13:00:19.092Z] Copying: 777/1024 [MB] (44 MBps) [2024-12-05T13:00:20.024Z] Copying: 821/1024 [MB] (44 MBps) [2024-12-05T13:00:20.958Z] Copying: 868/1024 [MB] (46 MBps) [2024-12-05T13:00:22.340Z] Copying: 911/1024 [MB] (43 MBps) [2024-12-05T13:00:23.310Z] Copying: 954/1024 [MB] (42 MBps) [2024-12-05T13:00:24.244Z] Copying: 999/1024 [MB] (44 MBps) [2024-12-05T13:00:24.812Z] Copying: 1023/1024 [MB] (24 MBps) [2024-12-05T13:00:24.812Z] Copying: 1024/1024 [MB] (average 37 MBps)[2024-12-05 13:00:24.618608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.618675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:24.960 [2024-12-05 13:00:24.618690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:24.960 [2024-12-05 13:00:24.618699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.619660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:24.960 [2024-12-05 13:00:24.622234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.622268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:24.960 [2024-12-05 13:00:24.622279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms
00:29:24.960 [2024-12-05 13:00:24.622288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.634155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.634189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:24.960 [2024-12-05 13:00:24.634200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.750 ms
00:29:24.960 [2024-12-05 13:00:24.634207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.653043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.653075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:24.960 [2024-12-05 13:00:24.653087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.815 ms
00:29:24.960 [2024-12-05 13:00:24.653094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.659315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.659344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:29:24.960 [2024-12-05 13:00:24.659355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms
00:29:24.960 [2024-12-05 13:00:24.659364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.660578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.660612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:29:24.960 [2024-12-05 13:00:24.660621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms
00:29:24.960 [2024-12-05 13:00:24.660629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.664186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.664219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:24.960 [2024-12-05 13:00:24.664239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.529 ms
00:29:24.960 [2024-12-05 13:00:24.664248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.715553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.715591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:24.960 [2024-12-05 13:00:24.715601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.271 ms
00:29:24.960 [2024-12-05 13:00:24.715609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.717333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.717364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:29:24.960 [2024-12-05 13:00:24.717375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms
00:29:24.960 [2024-12-05 13:00:24.717382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.718404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.718433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:29:24.960 [2024-12-05 13:00:24.718442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms
00:29:24.960 [2024-12-05 13:00:24.718449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.719327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.719356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:24.960 [2024-12-05 13:00:24.719365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms
00:29:24.960 [2024-12-05 13:00:24.719372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.720165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.960 [2024-12-05 13:00:24.720191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:24.960 [2024-12-05 13:00:24.720200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms
00:29:24.960 [2024-12-05 13:00:24.720207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.960 [2024-12-05 13:00:24.720232] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:24.960 [2024-12-05 13:00:24.720251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open
00:29:24.960 [2024-12-05 13:00:24.720269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:29:24.960 [2024-12-05 13:00:24.720425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.720998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.721006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.721013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.721021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.721028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:24.961 [2024-12-05 13:00:24.721044] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:24.961 [2024-12-05 13:00:24.721052] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fe379a6-2e0e-4789-9696-553f79e62356
00:29:24.961 [2024-12-05 13:00:24.721060] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048
00:29:24.961 [2024-12-05 13:00:24.721067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131008
00:29:24.961 [2024-12-05 13:00:24.721074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048
00:29:24.961 [2024-12-05 13:00:24.721089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074
00:29:24.961 [2024-12-05 13:00:24.721096] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:24.961 [2024-12-05 13:00:24.721103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:24.961 [2024-12-05 13:00:24.721110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:24.961 [2024-12-05 13:00:24.721116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:24.961 [2024-12-05 13:00:24.721123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:29:24.961 [2024-12-05 13:00:24.721130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.961 [2024-12-05 13:00:24.721138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:29:24.962 [2024-12-05 13:00:24.721148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms
00:29:24.962 [2024-12-05 13:00:24.721155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.723021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.962 [2024-12-05 13:00:24.723050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:29:24.962 [2024-12-05 13:00:24.723061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.852 ms
00:29:24.962 [2024-12-05 13:00:24.723070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.723169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:24.962 [2024-12-05 13:00:24.723179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:29:24.962 [2024-12-05 13:00:24.723188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms
00:29:24.962 [2024-12-05 13:00:24.723197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.729203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.729234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:24.962 [2024-12-05 13:00:24.729244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.729258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.729316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.729325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:24.962 [2024-12-05 13:00:24.729334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.729341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.729409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.729419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:24.962 [2024-12-05 13:00:24.729427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.729435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.729454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.729464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:24.962 [2024-12-05 13:00:24.729476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.729484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.740905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.740951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:24.962 [2024-12-05 13:00:24.740962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.740970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.749961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.750007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:24.962 [2024-12-05 13:00:24.750019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.750028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.750082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.750092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:24.962 [2024-12-05 13:00:24.750101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.750109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.750134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.750143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:24.962 [2024-12-05 13:00:24.750156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.750164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.750232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.750247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:24.962 [2024-12-05 13:00:24.750256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.750264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.750297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.750307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:24.962 [2024-12-05 13:00:24.750321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.750330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.750368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.750378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:24.962 [2024-12-05 13:00:24.750386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.750393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.750442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:24.962 [2024-12-05 13:00:24.750452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:24.962 [2024-12-05 13:00:24.750463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:24.962 [2024-12-05 13:00:24.750471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:24.962 [2024-12-05 13:00:24.750595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 134.768 ms, result 0
00:29:27.486
00:29:27.486
00:29:27.486 13:00:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:29:30.015 13:00:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:30.015 [2024-12-05 13:00:29.373034] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization...
00:29:30.015 [2024-12-05 13:00:29.373616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90731 ]
00:29:30.015 [2024-12-05 13:00:29.534089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:30.015 [2024-12-05 13:00:29.559417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:30.015 [2024-12-05 13:00:29.665417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:30.015 [2024-12-05 13:00:29.665501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:30.015 [2024-12-05 13:00:29.820416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.820482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:30.015 [2024-12-05 13:00:29.820501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:30.015 [2024-12-05 13:00:29.820510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.820566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.820576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:30.015 [2024-12-05 13:00:29.820585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:29:30.015 [2024-12-05 13:00:29.820593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.820620] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:30.015 [2024-12-05 13:00:29.820928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:30.015 [2024-12-05 13:00:29.820947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.820963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:30.015 [2024-12-05 13:00:29.820981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms
00:29:30.015 [2024-12-05 13:00:29.820989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.822501] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:30.015 [2024-12-05 13:00:29.825079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.825112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:29:30.015 [2024-12-05 13:00:29.825132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms
00:29:30.015 [2024-12-05 13:00:29.825144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.825258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.825287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:29:30.015 [2024-12-05 13:00:29.825300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:29:30.015 [2024-12-05 13:00:29.825314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.832006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.832034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:30.015 [2024-12-05 13:00:29.832049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.612 ms
00:29:30.015 [2024-12-05 13:00:29.832058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.832143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.832156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:30.015 [2024-12-05 13:00:29.832164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:29:30.015 [2024-12-05 13:00:29.832171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.832221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.832238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:29:30.015 [2024-12-05 13:00:29.832246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:29:30.015 [2024-12-05 13:00:29.832256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.832280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:30.015 [2024-12-05 13:00:29.834010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.834034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:30.015 [2024-12-05 13:00:29.834043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms
00:29:30.015 [2024-12-05 13:00:29.834050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.834082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.834091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:29:30.015 [2024-12-05 13:00:29.834099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:29:30.015 [2024-12-05 13:00:29.834112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.834147] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:29:30.015 [2024-12-05 13:00:29.834180] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:29:30.015 [2024-12-05 13:00:29.834228] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:29:30.015 [2024-12-05 13:00:29.834256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:29:30.015 [2024-12-05 13:00:29.834380] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:29:30.015 [2024-12-05 13:00:29.834397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:29:30.015 [2024-12-05 13:00:29.834410] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:29:30.015 [2024-12-05 13:00:29.834420] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:29:30.015 [2024-12-05 13:00:29.834439] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:29:30.015 [2024-12-05 13:00:29.834458] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:29:30.015 [2024-12-05 13:00:29.834467] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:29:30.015 [2024-12-05 13:00:29.834475] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:29:30.015 [2024-12-05 13:00:29.834482] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:29:30.015 [2024-12-05 13:00:29.834493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.834501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:29:30.015 [2024-12-05 13:00:29.834509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms
00:29:30.015 [2024-12-05 13:00:29.834517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.834605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.015 [2024-12-05 13:00:29.834618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:29:30.015 [2024-12-05 13:00:29.834629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:29:30.015 [2024-12-05 13:00:29.834636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.015 [2024-12-05 13:00:29.834740] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:29:30.015 [2024-12-05 13:00:29.834751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:29:30.015 [2024-12-05 13:00:29.834760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:30.015 [2024-12-05 13:00:29.834776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:30.015 [2024-12-05 13:00:29.834785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:29:30.016 [2024-12-05 13:00:29.834793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:29:30.016 [2024-12-05 13:00:29.834801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:29:30.016 [2024-12-05 13:00:29.834820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:29:30.016 [2024-12-05 13:00:29.834829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:29:30.016 [2024-12-05 13:00:29.834837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:30.016 [2024-12-05 13:00:29.834845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:29:30.016 [2024-12-05 13:00:29.834853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:29:30.016 [2024-12-05 13:00:29.834860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:30.016 [2024-12-05 13:00:29.834870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:29:30.016 [2024-12-05 13:00:29.834878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:29:30.016 [2024-12-05 13:00:29.834886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:30.016 [2024-12-05 13:00:29.834893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:29:30.016 [2024-12-05 13:00:29.834902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:29:30.016 [2024-12-05 13:00:29.834909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:30.016 [2024-12-05 13:00:29.834921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:29:30.016 [2024-12-05 13:00:29.834929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:29:30.016 [2024-12-05 13:00:29.834938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:30.016 [2024-12-05 13:00:29.834946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:29:30.016 [2024-12-05 13:00:29.834953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:29:30.016 [2024-12-05 13:00:29.834961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:30.016 [2024-12-05 13:00:29.834969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:29:30.016 [2024-12-05 13:00:29.834977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:29:30.016 [2024-12-05 13:00:29.834984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:30.016 [2024-12-05 13:00:29.834992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:29:30.016 [2024-12-05 13:00:29.835002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:29:30.016 [2024-12-05 13:00:29.835009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:30.016 [2024-12-05 13:00:29.835017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:29:30.016 [2024-12-05 13:00:29.835025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:29:30.016 [2024-12-05 13:00:29.835032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:30.016 [2024-12-05 13:00:29.835039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:29:30.016 [2024-12-05 13:00:29.835047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:29:30.016 [2024-12-05 13:00:29.835054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:30.016 [2024-12-05 13:00:29.835060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:29:30.016 [2024-12-05 13:00:29.835067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:29:30.016 [2024-12-05 13:00:29.835073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:30.016 [2024-12-05 13:00:29.835079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:29:30.016 [2024-12-05 13:00:29.835086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:29:30.016 [2024-12-05 13:00:29.835092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:30.016 [2024-12-05 13:00:29.835098] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:29:30.016 [2024-12-05 13:00:29.835111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:29:30.016 [2024-12-05 13:00:29.835120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:30.016 [2024-12-05 13:00:29.835127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:30.016 [2024-12-05 13:00:29.835135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:30.016 [2024-12-05 13:00:29.835141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:29:30.016 [2024-12-05 13:00:29.835148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:29:30.016 [2024-12-05 13:00:29.835155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:30.016 [2024-12-05 13:00:29.835162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:29:30.016 [2024-12-05 13:00:29.835169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:29:30.016 [2024-12-05 13:00:29.835178] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:30.016 [2024-12-05 13:00:29.835187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:30.016 [2024-12-05 13:00:29.835196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:29:30.016 [2024-12-05 13:00:29.835203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:29:30.016 [2024-12-05 13:00:29.835211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:29:30.016 [2024-12-05 13:00:29.835218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:29:30.016 [2024-12-05 13:00:29.835225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:29:30.016 [2024-12-05 13:00:29.835232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:29:30.016 [2024-12-05 13:00:29.835241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:29:30.016 [2024-12-05 13:00:29.835248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:29:30.016 [2024-12-05 13:00:29.835255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:29:30.016 [2024-12-05 13:00:29.835262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:29:30.016 [2024-12-05 13:00:29.835269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:29:30.016 [2024-12-05 13:00:29.835276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:29:30.016 [2024-12-05 13:00:29.835283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:29:30.016 [2024-12-05 13:00:29.835290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:29:30.016 [2024-12-05 13:00:29.835297] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:30.016 [2024-12-05 13:00:29.835305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:30.016 [2024-12-05 13:00:29.835313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:30.016 [2024-12-05 13:00:29.835320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:29:30.016 [2024-12-05 13:00:29.835327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:29:30.016 [2024-12-05 13:00:29.835334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:29:30.016 [2024-12-05 13:00:29.835341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.016 [2024-12-05 13:00:29.835349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:29:30.016 [2024-12-05 13:00:29.835359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms
00:29:30.016 [2024-12-05 13:00:29.835368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.016 [2024-12-05 13:00:29.847208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.016 [2024-12-05 13:00:29.847244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:30.016 [2024-12-05 13:00:29.847255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.798 ms
00:29:30.016 [2024-12-05 13:00:29.847270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.016 [2024-12-05 13:00:29.847352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.016 [2024-12-05 13:00:29.847361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:29:30.016 [2024-12-05 13:00:29.847370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:29:30.016 [2024-12-05 13:00:29.847377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.866361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.866413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:30.275 [2024-12-05 13:00:29.866430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.922 ms
00:29:30.275 [2024-12-05 13:00:29.866442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.866506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.866520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:30.275 [2024-12-05 13:00:29.866532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:29:30.275 [2024-12-05 13:00:29.866543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.867109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.867145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:30.275 [2024-12-05 13:00:29.867166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms
00:29:30.275 [2024-12-05 13:00:29.867193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.867380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.867393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:30.275 [2024-12-05 13:00:29.867405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms
00:29:30.275 [2024-12-05 13:00:29.867415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.874768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.874839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:30.275 [2024-12-05 13:00:29.874861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.327 ms
00:29:30.275 [2024-12-05 13:00:29.874872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.877918] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:29:30.275 [2024-12-05 13:00:29.877963] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:29:30.275 [2024-12-05 13:00:29.877987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.877998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:29:30.275 [2024-12-05 13:00:29.878010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.976 ms
00:29:30.275 [2024-12-05 13:00:29.878020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.892770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.892802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:29:30.275 [2024-12-05 13:00:29.892828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.698 ms
00:29:30.275 [2024-12-05 13:00:29.892836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.894583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.894613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:29:30.275 [2024-12-05 13:00:29.894622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.706 ms
00:29:30.275 [2024-12-05 13:00:29.894630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.895906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.895932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:29:30.275 [2024-12-05 13:00:29.895941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms
00:29:30.275 [2024-12-05 13:00:29.895949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.896303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.896322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:29:30.275 [2024-12-05 13:00:29.896337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms
00:29:30.275 [2024-12-05 13:00:29.896344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.913818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.275 [2024-12-05 13:00:29.913879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:29:30.275 [2024-12-05 13:00:29.913892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.438 ms
00:29:30.275 [2024-12-05 13:00:29.913901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.275 [2024-12-05 13:00:29.921569] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:29:30.276 [2024-12-05 13:00:29.924881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.276 [2024-12-05 13:00:29.924914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:29:30.276 [2024-12-05 13:00:29.924927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.936 ms
00:29:30.276 [2024-12-05 13:00:29.924936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.276 [2024-12-05 13:00:29.925037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.276 [2024-12-05 13:00:29.925055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:29:30.276 [2024-12-05 13:00:29.925070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:29:30.276 [2024-12-05 13:00:29.925079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.276 [2024-12-05 13:00:29.926988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.276 [2024-12-05 13:00:29.927026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:29:30.276 [2024-12-05 13:00:29.927036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms
00:29:30.276 [2024-12-05 13:00:29.927046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.276 [2024-12-05 13:00:29.927083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.276 [2024-12-05 13:00:29.927092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:29:30.276 [2024-12-05 13:00:29.927101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:30.276 [2024-12-05 13:00:29.927109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.276 [2024-12-05 13:00:29.927146] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:29:30.276 [2024-12-05 13:00:29.927156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.276 [2024-12-05 13:00:29.927168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:29:30.276 [2024-12-05 13:00:29.927179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:29:30.276 [2024-12-05 13:00:29.927186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.276 [2024-12-05 13:00:29.930924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.276 [2024-12-05 13:00:29.930955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:30.276 [2024-12-05 13:00:29.930973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.719 ms
00:29:30.276 [2024-12-05 13:00:29.930981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.276 [2024-12-05 13:00:29.931062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:30.276 [2024-12-05 13:00:29.931082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:30.276 [2024-12-05 13:00:29.931091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:29:30.276 [2024-12-05 13:00:29.931102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:30.276 [2024-12-05 13:00:29.933522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.936 ms, result 0
00:29:31.650  [2024-12-05T13:00:32.433Z] Copying: 952/1048576 [kB] (952 kBps) [2024-12-05T13:00:33.364Z] Copying: 6228/1048576 [kB] (5276 kBps) [2024-12-05T13:00:34.302Z] Copying: 56/1024 [MB] (50 MBps) [2024-12-05T13:00:35.244Z] Copying: 92/1024 [MB] (35 MBps) [2024-12-05T13:00:36.181Z] Copying: 119/1024 [MB] (27 MBps) [2024-12-05T13:00:37.175Z] Copying: 149/1024 [MB] (30 MBps) [2024-12-05T13:00:38.545Z] Copying: 170/1024 [MB] (20 MBps) [2024-12-05T13:00:39.483Z] Copying: 212/1024 [MB] (42 MBps) [2024-12-05T13:00:40.422Z] Copying: 256/1024 [MB] (43 MBps) [2024-12-05T13:00:41.358Z] Copying: 289/1024 [MB] (33 MBps) [2024-12-05T13:00:42.297Z] Copying: 324/1024 [MB] (35 MBps) [2024-12-05T13:00:43.235Z] Copying: 351/1024 [MB] (26 MBps) [2024-12-05T13:00:44.168Z] Copying: 384/1024 [MB] (32 MBps) [2024-12-05T13:00:45.541Z] Copying: 431/1024 [MB] (46 MBps) [2024-12-05T13:00:46.475Z] Copying: 482/1024 [MB] (50 MBps) [2024-12-05T13:00:47.412Z] Copying: 531/1024 [MB] (49 MBps) [2024-12-05T13:00:48.347Z] Copying: 582/1024 [MB] (50 MBps) [2024-12-05T13:00:49.351Z] Copying: 627/1024 [MB] (45 MBps) [2024-12-05T13:00:50.318Z] Copying: 674/1024 [MB] (46 MBps) [2024-12-05T13:00:51.246Z] Copying: 725/1024 [MB] (51 MBps) [2024-12-05T13:00:52.175Z] Copying: 775/1024 [MB] (50 MBps) [2024-12-05T13:00:53.548Z] Copying: 825/1024 [MB] (49 MBps) [2024-12-05T13:00:54.481Z] Copying: 871/1024 [MB] (46 MBps) [2024-12-05T13:00:55.412Z] Copying: 922/1024 [MB] (50 MBps) [2024-12-05T13:00:56.346Z] Copying: 970/1024 [MB] (48 MBps) [2024-12-05T13:00:56.346Z] Copying: 1020/1024 [MB] (50 MBps) [2024-12-05T13:00:56.605Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-12-05 13:00:56.561124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.561201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:56.753 [2024-12-05 13:00:56.561220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:56.753 [2024-12-05 13:00:56.561239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.561271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:56.753 [2024-12-05 13:00:56.561959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.561992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:56.753 [2024-12-05 13:00:56.562007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms
00:29:56.753 [2024-12-05 13:00:56.562019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.562351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.562375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:56.753 [2024-12-05 13:00:56.562388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms
00:29:56.753 [2024-12-05 13:00:56.562401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.573230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.573272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:56.753 [2024-12-05 13:00:56.573292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.804 ms
00:29:56.753 [2024-12-05 13:00:56.573301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.579544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.579578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:29:56.753 [2024-12-05 13:00:56.579588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.218 ms
00:29:56.753 [2024-12-05 13:00:56.579598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.581100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.581133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:29:56.753 [2024-12-05 13:00:56.581143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms
00:29:56.753 [2024-12-05 13:00:56.581150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.583815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.583914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:56.753 [2024-12-05 13:00:56.583923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms
00:29:56.753 [2024-12-05 13:00:56.583932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.584988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.585022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:56.753 [2024-12-05 13:00:56.585031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms
00:29:56.753 [2024-12-05 13:00:56.585050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.586558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.586590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:29:56.753 [2024-12-05 13:00:56.586599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms
00:29:56.753 [2024-12-05 13:00:56.586606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.587629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.587662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:29:56.753 [2024-12-05 13:00:56.587671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms
00:29:56.753 [2024-12-05 13:00:56.587677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:56.753 [2024-12-05 13:00:56.588538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:56.753 [2024-12-05 13:00:56.588570]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:56.753 [2024-12-05 13:00:56.588578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:29:56.753 [2024-12-05 13:00:56.588586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.753 [2024-12-05 13:00:56.589424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.754 [2024-12-05 13:00:56.589456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:56.754 [2024-12-05 13:00:56.589465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:29:56.754 [2024-12-05 13:00:56.589473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.754 [2024-12-05 13:00:56.589499] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:56.754 [2024-12-05 13:00:56.589514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:56.754 [2024-12-05 13:00:56.589532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:56.754 [2024-12-05 13:00:56.589541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 
0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.589997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590073] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:56.754 [2024-12-05 13:00:56.590193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 
13:00:56.590259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:56.755 [2024-12-05 13:00:56.590333] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:56.755 [2024-12-05 13:00:56.590351] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fe379a6-2e0e-4789-9696-553f79e62356 00:29:56.755 [2024-12-05 13:00:56.590362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:56.755 [2024-12-05 13:00:56.590370] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 134592 00:29:56.755 [2024-12-05 13:00:56.590378] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 132608 00:29:56.755 [2024-12-05 13:00:56.590386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0150 00:29:56.755 [2024-12-05 13:00:56.590394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:56.755 [2024-12-05 13:00:56.590402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:56.755 [2024-12-05 13:00:56.590410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:56.755 [2024-12-05 13:00:56.590416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:56.755 [2024-12-05 13:00:56.590428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:56.755 [2024-12-05 13:00:56.590435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.755 [2024-12-05 13:00:56.590443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:56.755 [2024-12-05 13:00:56.590451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:29:56.755 [2024-12-05 13:00:56.590462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.755 [2024-12-05 13:00:56.592251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.755 [2024-12-05 13:00:56.592276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:56.755 [2024-12-05 13:00:56.592286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.772 ms 00:29:56.755 [2024-12-05 13:00:56.592294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.755 [2024-12-05 13:00:56.592393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.755 [2024-12-05 13:00:56.592408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:56.755 [2024-12-05 13:00:56.592416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:29:56.755 [2024-12-05 13:00:56.592423] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.755 [2024-12-05 13:00:56.598309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.755 [2024-12-05 13:00:56.598344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:56.755 [2024-12-05 13:00:56.598360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.755 [2024-12-05 13:00:56.598369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.755 [2024-12-05 13:00:56.598423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.755 [2024-12-05 13:00:56.598435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:56.755 [2024-12-05 13:00:56.598443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.755 [2024-12-05 13:00:56.598451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.755 [2024-12-05 13:00:56.598507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.755 [2024-12-05 13:00:56.598523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:56.755 [2024-12-05 13:00:56.598531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.755 [2024-12-05 13:00:56.598539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.755 [2024-12-05 13:00:56.598555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.755 [2024-12-05 13:00:56.598563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:56.755 [2024-12-05 13:00:56.598574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.755 [2024-12-05 13:00:56.598581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.013 [2024-12-05 13:00:56.609882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.013 [2024-12-05 13:00:56.609938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.013 [2024-12-05 13:00:56.609950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.013 [2024-12-05 13:00:56.609958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.013 [2024-12-05 13:00:56.618782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.013 [2024-12-05 13:00:56.618844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.013 [2024-12-05 13:00:56.618865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.013 [2024-12-05 13:00:56.618874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.013 [2024-12-05 13:00:56.618929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.013 [2024-12-05 13:00:56.618939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.013 [2024-12-05 13:00:56.618947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.013 [2024-12-05 13:00:56.618955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.013 [2024-12-05 13:00:56.618980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.013 [2024-12-05 13:00:56.618988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.013 [2024-12-05 13:00:56.618996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:29:57.013 [2024-12-05 13:00:56.619004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.013 [2024-12-05 13:00:56.619072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.013 [2024-12-05 13:00:56.619088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.013 [2024-12-05 13:00:56.619096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.013 [2024-12-05 13:00:56.619104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.013 [2024-12-05 13:00:56.619135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.014 [2024-12-05 13:00:56.619144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:57.014 [2024-12-05 13:00:56.619152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.014 [2024-12-05 13:00:56.619160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.014 [2024-12-05 13:00:56.619201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.014 [2024-12-05 13:00:56.619211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.014 [2024-12-05 13:00:56.619219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.014 [2024-12-05 13:00:56.619226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.014 [2024-12-05 13:00:56.619270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.014 [2024-12-05 13:00:56.619281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.014 [2024-12-05 13:00:56.619294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.014 [2024-12-05 13:00:56.619302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.014 [2024-12-05 13:00:56.619428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.282 ms, result 0 00:30:01.193 00:30:01.193 00:30:01.193 13:01:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:03.720 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:03.720 13:01:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:03.720 [2024-12-05 13:01:03.164281] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:30:03.720 [2024-12-05 13:01:03.164401] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91046 ] 00:30:03.720 [2024-12-05 13:01:03.320548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.720 [2024-12-05 13:01:03.344870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.720 [2024-12-05 13:01:03.448860] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:03.720 [2024-12-05 13:01:03.448945] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:03.979 [2024-12-05 13:01:03.603937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.604007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:03.979 [2024-12-05 13:01:03.604023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:03.979 [2024-12-05 13:01:03.604032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.604087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.604102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:03.979 [2024-12-05 13:01:03.604110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:03.979 [2024-12-05 13:01:03.604118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.604142] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:03.979 [2024-12-05 13:01:03.604455] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:03.979 [2024-12-05 13:01:03.604476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.604485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:03.979 [2024-12-05 13:01:03.604496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:30:03.979 [2024-12-05 13:01:03.604504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.606176] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:03.979 [2024-12-05 13:01:03.608825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.608863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:03.979 [2024-12-05 13:01:03.608882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.651 ms 00:30:03.979 [2024-12-05 13:01:03.608894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.608950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.608969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:03.979 [2024-12-05 13:01:03.608978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:03.979 [2024-12-05 13:01:03.608986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.615439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:03.979 [2024-12-05 13:01:03.615479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:03.979 [2024-12-05 13:01:03.615495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.379 ms 00:30:03.979 [2024-12-05 13:01:03.615503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.615602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.615611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:03.979 [2024-12-05 13:01:03.615620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:03.979 [2024-12-05 13:01:03.615630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.615690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.615701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:03.979 [2024-12-05 13:01:03.615709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:03.979 [2024-12-05 13:01:03.615719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.615744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:03.979 [2024-12-05 13:01:03.617438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.617472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:03.979 [2024-12-05 13:01:03.617481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms 00:30:03.979 [2024-12-05 13:01:03.617489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.617526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.617534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:03.979 [2024-12-05 13:01:03.617543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:03.979 [2024-12-05 13:01:03.617556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.617581] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:03.979 [2024-12-05 13:01:03.617608] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:03.979 [2024-12-05 13:01:03.617650] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:03.979 [2024-12-05 13:01:03.617674] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:03.979 [2024-12-05 13:01:03.617789] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:03.979 [2024-12-05 13:01:03.617800] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:03.979 [2024-12-05 13:01:03.617825] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:03.979 [2024-12-05 13:01:03.617836] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:03.979 [2024-12-05 13:01:03.617844] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:03.979 [2024-12-05 13:01:03.617852] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:03.979 [2024-12-05 13:01:03.617861] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:03.979 [2024-12-05 13:01:03.617868] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:03.979 [2024-12-05 13:01:03.617876] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:03.979 [2024-12-05 13:01:03.617884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.617891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:03.979 [2024-12-05 13:01:03.617903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:30:03.979 [2024-12-05 13:01:03.617916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.618002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.979 [2024-12-05 13:01:03.618012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:03.979 [2024-12-05 13:01:03.618025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:30:03.979 [2024-12-05 13:01:03.618032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.979 [2024-12-05 13:01:03.618154] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:03.979 [2024-12-05 13:01:03.618170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:03.979 [2024-12-05 13:01:03.618180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:03.979 [2024-12-05 13:01:03.618196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:03.979 [2024-12-05 13:01:03.618213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:03.979 [2024-12-05 13:01:03.618229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:03.979 [2024-12-05 13:01:03.618237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:03.979 [2024-12-05 13:01:03.618255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:03.979 [2024-12-05 13:01:03.618263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:03.979 [2024-12-05 13:01:03.618270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:03.979 [2024-12-05 13:01:03.618278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:03.979 [2024-12-05 13:01:03.618286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:03.979 [2024-12-05 13:01:03.618294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:03.979 [2024-12-05 13:01:03.618309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:03.979 [2024-12-05 13:01:03.618317] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:03.979 [2024-12-05 13:01:03.618332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.979 [2024-12-05 13:01:03.618348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:03.979 [2024-12-05 13:01:03.618356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.979 [2024-12-05 13:01:03.618374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:03.979 [2024-12-05 13:01:03.618387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.979 [2024-12-05 13:01:03.618404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:03.979 [2024-12-05 13:01:03.618411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:03.979 [2024-12-05 13:01:03.618419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:03.980 [2024-12-05 13:01:03.618427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:03.980 [2024-12-05 13:01:03.618435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:03.980 [2024-12-05 13:01:03.618442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:03.980 [2024-12-05 13:01:03.618450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:03.980 [2024-12-05 13:01:03.618458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:03.980 [2024-12-05 13:01:03.618465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:03.980 [2024-12-05 13:01:03.618473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:03.980 [2024-12-05 13:01:03.618481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:03.980 [2024-12-05 13:01:03.618488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.980 [2024-12-05 13:01:03.618495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:03.980 [2024-12-05 13:01:03.618501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:03.980 [2024-12-05 13:01:03.618510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.980 [2024-12-05 13:01:03.618516] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:03.980 [2024-12-05 13:01:03.618526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:03.980 [2024-12-05 13:01:03.618533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:03.980 [2024-12-05 13:01:03.618541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:03.980 [2024-12-05 13:01:03.618548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:03.980 [2024-12-05 13:01:03.618555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:03.980 [2024-12-05 13:01:03.618561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:03.980 
[2024-12-05 13:01:03.618568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:03.980 [2024-12-05 13:01:03.618574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:03.980 [2024-12-05 13:01:03.618580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:03.980 [2024-12-05 13:01:03.618589] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:03.980 [2024-12-05 13:01:03.618601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:03.980 [2024-12-05 13:01:03.618609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:03.980 [2024-12-05 13:01:03.618618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:03.980 [2024-12-05 13:01:03.618625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:03.980 [2024-12-05 13:01:03.618634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:03.980 [2024-12-05 13:01:03.618645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:03.980 [2024-12-05 13:01:03.618653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:03.980 [2024-12-05 13:01:03.618660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:03.980 [2024-12-05 13:01:03.618667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:03.980 [2024-12-05 13:01:03.618674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:03.980 [2024-12-05 13:01:03.618681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:03.980 [2024-12-05 13:01:03.618688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:03.980 [2024-12-05 13:01:03.618695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:03.980 [2024-12-05 13:01:03.618702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:03.980 [2024-12-05 13:01:03.618711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:03.980 [2024-12-05 13:01:03.618718] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:03.980 [2024-12-05 13:01:03.618727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:03.980 [2024-12-05 13:01:03.618735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:03.980 [2024-12-05 13:01:03.618742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:03.980 [2024-12-05 13:01:03.618749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:03.980 [2024-12-05 13:01:03.618758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:03.980 [2024-12-05 13:01:03.618765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.618773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:03.980 [2024-12-05 13:01:03.618780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:30:03.980 [2024-12-05 13:01:03.618792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.630236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.630274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:03.980 [2024-12-05 13:01:03.630290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.381 ms 00:30:03.980 [2024-12-05 13:01:03.630299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.630387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.630395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:03.980 [2024-12-05 13:01:03.630404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:03.980 [2024-12-05 13:01:03.630412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.649297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.649341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:03.980 [2024-12-05 13:01:03.649354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.819 ms 00:30:03.980 [2024-12-05 13:01:03.649369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.649418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.649428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:03.980 [2024-12-05 13:01:03.649437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:03.980 [2024-12-05 13:01:03.649451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.649965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.649998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:03.980 [2024-12-05 13:01:03.650011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:30:03.980 [2024-12-05 13:01:03.650022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.650196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.650221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:03.980 [2024-12-05 13:01:03.650233] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:30:03.980 [2024-12-05 13:01:03.650243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.657253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.657289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:03.980 [2024-12-05 13:01:03.657302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.987 ms 00:30:03.980 [2024-12-05 13:01:03.657318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.660195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:03.980 [2024-12-05 13:01:03.660237] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:03.980 [2024-12-05 13:01:03.660255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.660265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:03.980 [2024-12-05 13:01:03.660276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:30:03.980 [2024-12-05 13:01:03.660286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.674874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.674916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:03.980 [2024-12-05 13:01:03.674930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.539 ms 00:30:03.980 [2024-12-05 13:01:03.674938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.676467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.676499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:03.980 [2024-12-05 13:01:03.676507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:30:03.980 [2024-12-05 13:01:03.676515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.677735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.677763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:03.980 [2024-12-05 13:01:03.677772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:30:03.980 [2024-12-05 13:01:03.677779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.678116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.678139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:03.980 [2024-12-05 13:01:03.678148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:30:03.980 [2024-12-05 13:01:03.678159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.980 [2024-12-05 13:01:03.695227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.980 [2024-12-05 13:01:03.695301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:03.980 [2024-12-05 13:01:03.695314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.043 ms 00:30:03.980 [2024-12-05 13:01:03.695323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 [2024-12-05 13:01:03.703064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:03.981 [2024-12-05 13:01:03.706063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.981 [2024-12-05 13:01:03.706097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:03.981 [2024-12-05 13:01:03.706109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.696 ms 00:30:03.981 [2024-12-05 13:01:03.706122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 [2024-12-05 13:01:03.706185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.981 [2024-12-05 13:01:03.706196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:03.981 [2024-12-05 13:01:03.706205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:03.981 [2024-12-05 13:01:03.706214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 [2024-12-05 13:01:03.706942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.981 [2024-12-05 13:01:03.706976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:03.981 [2024-12-05 13:01:03.706986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:30:03.981 [2024-12-05 13:01:03.706994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 [2024-12-05 13:01:03.707028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.981 [2024-12-05 13:01:03.707037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:03.981 [2024-12-05 13:01:03.707046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:03.981 [2024-12-05 13:01:03.707057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 [2024-12-05 13:01:03.707092] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:03.981 [2024-12-05 13:01:03.707103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.981 [2024-12-05 13:01:03.707110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:03.981 [2024-12-05 13:01:03.707124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:03.981 [2024-12-05 13:01:03.707132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 [2024-12-05 13:01:03.710749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.981 [2024-12-05 13:01:03.710785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:03.981 [2024-12-05 13:01:03.710795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.599 ms 00:30:03.981 [2024-12-05 13:01:03.710829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 [2024-12-05 13:01:03.710896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:03.981 [2024-12-05 13:01:03.710906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:03.981 [2024-12-05 13:01:03.710915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:03.981 [2024-12-05 13:01:03.710926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:03.981 
[2024-12-05 13:01:03.711956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.555 ms, result 0 00:30:05.358  [2024-12-05T13:01:06.149Z] Copying: 46/1024 [MB] (46 MBps) [2024-12-05T13:01:07.079Z] Copying: 93/1024 [MB] (47 MBps) [2024-12-05T13:01:08.011Z] Copying: 139/1024 [MB] (45 MBps) [2024-12-05T13:01:08.944Z] Copying: 184/1024 [MB] (45 MBps) [2024-12-05T13:01:10.315Z] Copying: 229/1024 [MB] (44 MBps) [2024-12-05T13:01:11.248Z] Copying: 276/1024 [MB] (46 MBps) [2024-12-05T13:01:12.181Z] Copying: 322/1024 [MB] (45 MBps) [2024-12-05T13:01:13.113Z] Copying: 367/1024 [MB] (45 MBps) [2024-12-05T13:01:14.088Z] Copying: 413/1024 [MB] (45 MBps) [2024-12-05T13:01:15.021Z] Copying: 461/1024 [MB] (47 MBps) [2024-12-05T13:01:15.954Z] Copying: 506/1024 [MB] (45 MBps) [2024-12-05T13:01:16.885Z] Copying: 553/1024 [MB] (46 MBps) [2024-12-05T13:01:18.258Z] Copying: 599/1024 [MB] (45 MBps) [2024-12-05T13:01:19.190Z] Copying: 644/1024 [MB] (45 MBps) [2024-12-05T13:01:20.123Z] Copying: 692/1024 [MB] (47 MBps) [2024-12-05T13:01:21.059Z] Copying: 740/1024 [MB] (47 MBps) [2024-12-05T13:01:21.992Z] Copying: 785/1024 [MB] (45 MBps) [2024-12-05T13:01:22.943Z] Copying: 830/1024 [MB] (44 MBps) [2024-12-05T13:01:24.314Z] Copying: 877/1024 [MB] (47 MBps) [2024-12-05T13:01:25.246Z] Copying: 921/1024 [MB] (43 MBps) [2024-12-05T13:01:26.175Z] Copying: 964/1024 [MB] (43 MBps) [2024-12-05T13:01:26.433Z] Copying: 1009/1024 [MB] (44 MBps) [2024-12-05T13:01:26.433Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-12-05 13:01:26.366910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.581 [2024-12-05 13:01:26.366991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:26.581 [2024-12-05 13:01:26.367017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:26.581 [2024-12-05 13:01:26.367028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.581 [2024-12-05 13:01:26.367058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:26.581 [2024-12-05 13:01:26.367636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.581 [2024-12-05 13:01:26.367662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:26.581 [2024-12-05 13:01:26.367676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:30:26.581 [2024-12-05 13:01:26.367687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.581 [2024-12-05 13:01:26.367980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.581 [2024-12-05 13:01:26.367993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:26.581 [2024-12-05 13:01:26.368005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:30:26.581 [2024-12-05 13:01:26.368019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.581 [2024-12-05 13:01:26.371434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.581 [2024-12-05 13:01:26.371478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:26.581 [2024-12-05 13:01:26.371490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:30:26.581 [2024-12-05 13:01:26.371499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.581 [2024-12-05 13:01:26.377931] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.581 [2024-12-05 13:01:26.377982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:26.581 [2024-12-05 13:01:26.377995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.402 ms 00:30:26.581 [2024-12-05 13:01:26.378014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.581 [2024-12-05 13:01:26.381421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.581 [2024-12-05 13:01:26.381468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:26.581 [2024-12-05 13:01:26.381480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.313 ms 00:30:26.581 [2024-12-05 13:01:26.381489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.581 [2024-12-05 13:01:26.384443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.581 [2024-12-05 13:01:26.384490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:26.581 [2024-12-05 13:01:26.384504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.930 ms 00:30:26.581 [2024-12-05 13:01:26.384515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.581 [2024-12-05 13:01:26.385530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.582 [2024-12-05 13:01:26.385569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:26.582 [2024-12-05 13:01:26.385593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:30:26.582 [2024-12-05 13:01:26.385606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.582 [2024-12-05 13:01:26.387006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.582 [2024-12-05 13:01:26.387043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:26.582 [2024-12-05 13:01:26.387055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:30:26.582 [2024-12-05 13:01:26.387064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.582 [2024-12-05 13:01:26.388113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.582 [2024-12-05 13:01:26.388148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:26.582 [2024-12-05 13:01:26.388159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:30:26.582 [2024-12-05 13:01:26.388167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.582 [2024-12-05 13:01:26.389004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.582 [2024-12-05 13:01:26.389042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:26.582 [2024-12-05 13:01:26.389052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:30:26.582 [2024-12-05 13:01:26.389062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.582 [2024-12-05 13:01:26.389908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.582 [2024-12-05 13:01:26.389944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:26.582 [2024-12-05 13:01:26.389956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:30:26.582 [2024-12-05 13:01:26.389965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:26.582 [2024-12-05 13:01:26.389986] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:26.582 [2024-12-05 13:01:26.390003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:26.582 [2024-12-05 13:01:26.390016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:26.582 [2024-12-05 13:01:26.390027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: 
free 00:30:26.582 [2024-12-05 13:01:26.390267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 
261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:26.582 [2024-12-05 13:01:26.390904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.390991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.391000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.391015] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.391025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:26.583 [2024-12-05 13:01:26.391043] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:26.583 [2024-12-05 13:01:26.391053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fe379a6-2e0e-4789-9696-553f79e62356 00:30:26.583 [2024-12-05 13:01:26.391063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:26.583 [2024-12-05 13:01:26.391072] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:26.583 [2024-12-05 13:01:26.391082] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:26.583 [2024-12-05 13:01:26.391100] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:26.583 [2024-12-05 13:01:26.391109] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:26.583 [2024-12-05 13:01:26.391119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:26.583 [2024-12-05 13:01:26.391135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:26.583 [2024-12-05 13:01:26.391154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:26.583 [2024-12-05 13:01:26.391162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:26.583 [2024-12-05 13:01:26.391172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.583 [2024-12-05 13:01:26.391181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:26.583 [2024-12-05 13:01:26.391191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:30:26.583 [2024-12-05 13:01:26.391199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.393053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.583 [2024-12-05 13:01:26.393092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:26.583 [2024-12-05 13:01:26.393105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:30:26.583 [2024-12-05 13:01:26.393116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.393224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.583 [2024-12-05 13:01:26.393234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:26.583 [2024-12-05 13:01:26.393245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:26.583 [2024-12-05 13:01:26.393254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.399248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.399306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:26.583 [2024-12-05 13:01:26.399325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.399335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.399426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.399436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:26.583 
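Alongside the Action records for the persist steps, the shutdown trace emits Rollback records that mirror the startup steps (Initialize reloc, Initialize bands metadata, and so on), each reporting a 0.000 ms duration in this run. To list which rollback handlers fired, a sketch along these lines could be used, again assuming a one-record-per-line capture in the hypothetical ftl.log:

# Each Rollback record is immediately followed by its name record,
# so -A1 pairs them; the second grep keeps only the names.
grep -A1 'Rollback' ftl.log | grep 'name:'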
[2024-12-05 13:01:26.399447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.399457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.399537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.399549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:26.583 [2024-12-05 13:01:26.399560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.399573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.399593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.399603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:26.583 [2024-12-05 13:01:26.399614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.399624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.411404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.411485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:26.583 [2024-12-05 13:01:26.411503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.411527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.420478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.420561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:26.583 [2024-12-05 13:01:26.420577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.420587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.420668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.420679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:26.583 [2024-12-05 13:01:26.420690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.420699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.420734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.420744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:26.583 [2024-12-05 13:01:26.420753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.420762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.420935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.420948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:26.583 [2024-12-05 13:01:26.420957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.420966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.421000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.421015] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:26.583 [2024-12-05 13:01:26.421025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.421035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.421093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.421116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:26.583 [2024-12-05 13:01:26.421135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.421148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.421220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:26.583 [2024-12-05 13:01:26.421234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:26.583 [2024-12-05 13:01:26.421250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:26.583 [2024-12-05 13:01:26.421269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.583 [2024-12-05 13:01:26.421437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.493 ms, result 0 00:30:26.882 00:30:26.882 00:30:26.882 13:01:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:29.406 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:29.406 13:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:29.406 13:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:29.406 13:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:29.406 13:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:29.406 13:01:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:29.406 13:01:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:29.406 13:01:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:29.406 13:01:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89720 00:30:29.406 13:01:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 89720 ']' 00:30:29.406 13:01:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 89720 00:30:29.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89720) - No such process 00:30:29.406 Process with pid 89720 is not found 00:30:29.406 13:01:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 89720 is not found' 00:30:29.406 13:01:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:29.664 Remove shared memory files 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:29.664 13:01:29 
ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:29.664 00:30:29.664 real 2m32.372s 00:30:29.664 user 2m50.290s 00:30:29.664 sys 0m26.174s 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.664 13:01:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:29.664 ************************************ 00:30:29.664 END TEST ftl_dirty_shutdown 00:30:29.664 ************************************ 00:30:29.664 13:01:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:29.664 13:01:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:29.664 13:01:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.664 13:01:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:29.664 ************************************ 00:30:29.664 START TEST ftl_upgrade_shutdown 00:30:29.664 ************************************ 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:29.664 * Looking for test storage... 00:30:29.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.664 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:29.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.922 --rc genhtml_branch_coverage=1 00:30:29.922 --rc genhtml_function_coverage=1 00:30:29.922 --rc genhtml_legend=1 00:30:29.922 --rc geninfo_all_blocks=1 00:30:29.922 --rc geninfo_unexecuted_blocks=1 00:30:29.922 00:30:29.922 ' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:29.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.922 --rc genhtml_branch_coverage=1 00:30:29.922 --rc genhtml_function_coverage=1 00:30:29.922 --rc genhtml_legend=1 00:30:29.922 --rc geninfo_all_blocks=1 00:30:29.922 --rc geninfo_unexecuted_blocks=1 00:30:29.922 00:30:29.922 ' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:29.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.922 --rc genhtml_branch_coverage=1 00:30:29.922 --rc genhtml_function_coverage=1 00:30:29.922 --rc genhtml_legend=1 00:30:29.922 --rc geninfo_all_blocks=1 00:30:29.922 --rc geninfo_unexecuted_blocks=1 00:30:29.922 00:30:29.922 ' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:29.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.922 --rc genhtml_branch_coverage=1 00:30:29.922 --rc genhtml_function_coverage=1 00:30:29.922 --rc genhtml_legend=1 00:30:29.922 --rc geninfo_all_blocks=1 00:30:29.922 --rc geninfo_unexecuted_blocks=1 00:30:29.922 00:30:29.922 ' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:29.922 13:01:29 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:29.922 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91386 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91386 00:30:29.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91386 ']' 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.923 13:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:29.923 [2024-12-05 13:01:29.636861] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:30:29.923 [2024-12-05 13:01:29.637019] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91386 ] 00:30:30.180 [2024-12-05 13:01:29.798375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.180 [2024-12-05 13:01:29.834749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:30.746 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:31.004 13:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:31.263 13:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:31.263 { 00:30:31.263 "name": "basen1", 00:30:31.263 "aliases": [ 00:30:31.263 "8992487f-d4ef-49d5-ac7e-265897e833cd" 00:30:31.263 ], 00:30:31.263 "product_name": "NVMe disk", 00:30:31.263 "block_size": 4096, 00:30:31.263 "num_blocks": 1310720, 00:30:31.263 "uuid": "8992487f-d4ef-49d5-ac7e-265897e833cd", 00:30:31.263 "numa_id": -1, 00:30:31.263 "assigned_rate_limits": { 00:30:31.263 "rw_ios_per_sec": 0, 00:30:31.263 "rw_mbytes_per_sec": 0, 00:30:31.263 "r_mbytes_per_sec": 0, 00:30:31.263 "w_mbytes_per_sec": 0 00:30:31.263 }, 00:30:31.263 "claimed": true, 00:30:31.263 "claim_type": "read_many_write_one", 00:30:31.263 "zoned": false, 00:30:31.263 "supported_io_types": { 00:30:31.263 "read": true, 00:30:31.263 "write": true, 00:30:31.263 "unmap": true, 00:30:31.263 "flush": true, 00:30:31.263 "reset": true, 00:30:31.263 "nvme_admin": true, 00:30:31.263 "nvme_io": true, 00:30:31.263 "nvme_io_md": false, 00:30:31.263 "write_zeroes": true, 00:30:31.263 "zcopy": false, 00:30:31.263 "get_zone_info": false, 00:30:31.263 "zone_management": false, 00:30:31.263 "zone_append": false, 00:30:31.263 "compare": true, 00:30:31.263 "compare_and_write": false, 00:30:31.263 "abort": true, 00:30:31.263 "seek_hole": false, 00:30:31.263 "seek_data": false, 00:30:31.263 "copy": true, 00:30:31.264 "nvme_iov_md": false 00:30:31.264 }, 00:30:31.264 "driver_specific": { 00:30:31.264 "nvme": [ 00:30:31.264 { 00:30:31.264 "pci_address": "0000:00:11.0", 00:30:31.264 "trid": { 00:30:31.264 "trtype": "PCIe", 00:30:31.264 "traddr": "0000:00:11.0" 00:30:31.264 }, 00:30:31.264 "ctrlr_data": { 00:30:31.264 "cntlid": 0, 00:30:31.264 "vendor_id": "0x1b36", 00:30:31.264 "model_number": "QEMU NVMe Ctrl", 00:30:31.264 "serial_number": "12341", 00:30:31.264 "firmware_revision": "8.0.0", 00:30:31.264 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:31.264 "oacs": { 00:30:31.264 "security": 0, 00:30:31.264 "format": 1, 00:30:31.264 "firmware": 0, 00:30:31.264 "ns_manage": 1 00:30:31.264 }, 00:30:31.264 "multi_ctrlr": false, 00:30:31.264 "ana_reporting": false 00:30:31.264 }, 00:30:31.264 "vs": { 00:30:31.264 "nvme_version": "1.4" 00:30:31.264 }, 00:30:31.264 "ns_data": { 00:30:31.264 "id": 1, 00:30:31.264 "can_share": false 00:30:31.264 } 00:30:31.264 } 00:30:31.264 ], 00:30:31.264 "mp_policy": "active_passive" 00:30:31.264 } 00:30:31.264 } 00:30:31.264 ]' 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:31.264 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:31.523 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=eb330788-87a8-4a32-980e-b36def85bd75 00:30:31.523 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:31.523 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb330788-87a8-4a32-980e-b36def85bd75 00:30:32.089 13:01:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=82ea626c-4a05-4a34-8e18-91446a073e65 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 82ea626c-4a05-4a34-8e18-91446a073e65 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f8a4249f-9222-4a19-8b54-deecd066a271 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f8a4249f-9222-4a19-8b54-deecd066a271 ]] 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f8a4249f-9222-4a19-8b54-deecd066a271 5120 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f8a4249f-9222-4a19-8b54-deecd066a271 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f8a4249f-9222-4a19-8b54-deecd066a271 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f8a4249f-9222-4a19-8b54-deecd066a271 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:33.462 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:33.721 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8a4249f-9222-4a19-8b54-deecd066a271 00:30:33.721 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:33.721 { 00:30:33.721 "name": "f8a4249f-9222-4a19-8b54-deecd066a271", 00:30:33.721 "aliases": [ 00:30:33.721 "lvs/basen1p0" 00:30:33.721 ], 00:30:33.721 "product_name": "Logical Volume", 00:30:33.721 "block_size": 4096, 00:30:33.721 "num_blocks": 5242880, 00:30:33.721 "uuid": "f8a4249f-9222-4a19-8b54-deecd066a271", 00:30:33.721 "assigned_rate_limits": { 00:30:33.721 "rw_ios_per_sec": 0, 00:30:33.721 "rw_mbytes_per_sec": 0, 00:30:33.721 "r_mbytes_per_sec": 0, 00:30:33.721 "w_mbytes_per_sec": 0 00:30:33.721 }, 00:30:33.721 "claimed": false, 00:30:33.721 "zoned": false, 00:30:33.721 "supported_io_types": { 00:30:33.721 "read": true, 00:30:33.721 "write": true, 00:30:33.721 "unmap": true, 00:30:33.721 "flush": false, 00:30:33.721 "reset": true, 00:30:33.721 "nvme_admin": false, 00:30:33.721 "nvme_io": false, 00:30:33.721 "nvme_io_md": false, 00:30:33.721 "write_zeroes": 
true, 00:30:33.721 "zcopy": false, 00:30:33.721 "get_zone_info": false, 00:30:33.721 "zone_management": false, 00:30:33.721 "zone_append": false, 00:30:33.721 "compare": false, 00:30:33.721 "compare_and_write": false, 00:30:33.721 "abort": false, 00:30:33.721 "seek_hole": true, 00:30:33.721 "seek_data": true, 00:30:33.721 "copy": false, 00:30:33.721 "nvme_iov_md": false 00:30:33.721 }, 00:30:33.721 "driver_specific": { 00:30:33.721 "lvol": { 00:30:33.721 "lvol_store_uuid": "82ea626c-4a05-4a34-8e18-91446a073e65", 00:30:33.721 "base_bdev": "basen1", 00:30:33.721 "thin_provision": true, 00:30:33.721 "num_allocated_clusters": 0, 00:30:33.721 "snapshot": false, 00:30:33.721 "clone": false, 00:30:33.721 "esnap_clone": false 00:30:33.721 } 00:30:33.721 } 00:30:33.721 } 00:30:33.721 ]' 00:30:33.721 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:33.721 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:33.721 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:33.980 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:33.980 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:33.980 13:01:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:33.980 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:33.980 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:33.980 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:34.238 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:34.238 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:34.238 13:01:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:34.238 13:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:34.238 13:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:34.238 13:01:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f8a4249f-9222-4a19-8b54-deecd066a271 -c cachen1p0 --l2p_dram_limit 2 00:30:34.498 [2024-12-05 13:01:34.260966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.498 [2024-12-05 13:01:34.261033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:34.498 [2024-12-05 13:01:34.261049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:34.498 [2024-12-05 13:01:34.261059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.498 [2024-12-05 13:01:34.261115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.498 [2024-12-05 13:01:34.261126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:34.498 [2024-12-05 13:01:34.261133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:34.498 [2024-12-05 13:01:34.261143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.498 [2024-12-05 13:01:34.261160] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:34.498 [2024-12-05 
13:01:34.261535] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:34.498 [2024-12-05 13:01:34.261548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.261556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:34.499 [2024-12-05 13:01:34.261562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:30:34.499 [2024-12-05 13:01:34.261571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.261598] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 57102aeb-cbe4-425d-bbdd-2f99276d7012 00:30:34.499 [2024-12-05 13:01:34.262907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.262948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:34.499 [2024-12-05 13:01:34.262959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:34.499 [2024-12-05 13:01:34.262965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.269699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.269730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:34.499 [2024-12-05 13:01:34.269747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.665 ms 00:30:34.499 [2024-12-05 13:01:34.269754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.269798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.269815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:34.499 [2024-12-05 13:01:34.269823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:34.499 [2024-12-05 13:01:34.269830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.269881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.269888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:34.499 [2024-12-05 13:01:34.269897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:34.499 [2024-12-05 13:01:34.269903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.269925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:34.499 [2024-12-05 13:01:34.271573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.271604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:34.499 [2024-12-05 13:01:34.271612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.656 ms 00:30:34.499 [2024-12-05 13:01:34.271620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.271645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.271654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:34.499 [2024-12-05 13:01:34.271660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:34.499 [2024-12-05 13:01:34.271671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.271686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:34.499 [2024-12-05 13:01:34.271826] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:34.499 [2024-12-05 13:01:34.271837] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:34.499 [2024-12-05 13:01:34.271848] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:34.499 [2024-12-05 13:01:34.271857] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:34.499 [2024-12-05 13:01:34.271877] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:34.499 [2024-12-05 13:01:34.271883] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:34.499 [2024-12-05 13:01:34.271893] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:34.499 [2024-12-05 13:01:34.271900] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:34.499 [2024-12-05 13:01:34.271907] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:34.499 [2024-12-05 13:01:34.271914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.271922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:34.499 [2024-12-05 13:01:34.271928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:30:34.499 [2024-12-05 13:01:34.271936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.272004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.499 [2024-12-05 13:01:34.272013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:34.499 [2024-12-05 13:01:34.272019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:34.499 [2024-12-05 13:01:34.272028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.499 [2024-12-05 13:01:34.272102] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:34.499 [2024-12-05 13:01:34.272112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:34.499 [2024-12-05 13:01:34.272118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:34.499 [2024-12-05 13:01:34.272126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:34.499 [2024-12-05 13:01:34.272139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:34.499 [2024-12-05 13:01:34.272153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:34.499 [2024-12-05 13:01:34.272158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:34.499 [2024-12-05 13:01:34.272164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:34.499 [2024-12-05 13:01:34.272176] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:34.499 [2024-12-05 13:01:34.272181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:34.499 [2024-12-05 13:01:34.272194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:34.499 [2024-12-05 13:01:34.272200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:34.499 [2024-12-05 13:01:34.272212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:34.499 [2024-12-05 13:01:34.272217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:34.499 [2024-12-05 13:01:34.272231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:34.499 [2024-12-05 13:01:34.272239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.499 [2024-12-05 13:01:34.272246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:34.499 [2024-12-05 13:01:34.272253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:34.499 [2024-12-05 13:01:34.272259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.499 [2024-12-05 13:01:34.272266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:34.499 [2024-12-05 13:01:34.272272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:34.499 [2024-12-05 13:01:34.272280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.499 [2024-12-05 13:01:34.272286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:34.499 [2024-12-05 13:01:34.272295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:34.499 [2024-12-05 13:01:34.272301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:34.499 [2024-12-05 13:01:34.272309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:34.499 [2024-12-05 13:01:34.272317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:34.499 [2024-12-05 13:01:34.272326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:34.499 [2024-12-05 13:01:34.272340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:34.499 [2024-12-05 13:01:34.272346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:34.499 [2024-12-05 13:01:34.272360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:34.499 [2024-12-05 13:01:34.272381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:34.499 [2024-12-05 13:01:34.272387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272394] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:34.499 [2024-12-05 13:01:34.272402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:34.499 [2024-12-05 13:01:34.272411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:34.499 [2024-12-05 13:01:34.272419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:34.499 [2024-12-05 13:01:34.272427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:34.499 [2024-12-05 13:01:34.272433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:34.499 [2024-12-05 13:01:34.272440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:34.499 [2024-12-05 13:01:34.272447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:34.499 [2024-12-05 13:01:34.272454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:34.499 [2024-12-05 13:01:34.272461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:34.500 [2024-12-05 13:01:34.272471] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:34.500 [2024-12-05 13:01:34.272481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:34.500 [2024-12-05 13:01:34.272497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:34.500 [2024-12-05 13:01:34.272520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:34.500 [2024-12-05 13:01:34.272527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:34.500 [2024-12-05 13:01:34.272542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:34.500 [2024-12-05 13:01:34.272548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:34.500 [2024-12-05 13:01:34.272604] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:34.500 [2024-12-05 13:01:34.272611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:34.500 [2024-12-05 13:01:34.272626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:34.500 [2024-12-05 13:01:34.272635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:34.500 [2024-12-05 13:01:34.272641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:34.500 [2024-12-05 13:01:34.272649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.500 [2024-12-05 13:01:34.272659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:34.500 [2024-12-05 13:01:34.272669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.598 ms 00:30:34.500 [2024-12-05 13:01:34.272676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.500 [2024-12-05 13:01:34.272708] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
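The layout dump above describes the device stack assembled earlier in this run. A minimal sketch of that RPC sequence, with paths, sizes, and UUIDs taken verbatim from the ftl/common.sh calls logged above; the ~20% overprovisioning figure in the comment is an assumption (it is not stated in the log) used only to show where the L2P entry count plausibly comes from:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_lvol_delete_lvstore -u eb330788-87a8-4a32-980e-b36def85bd75   # clean up stale lvstore
$RPC bdev_lvol_create_lvstore basen1 lvs                                # -> 82ea626c-4a05-4a34-8e18-91446a073e65
$RPC bdev_lvol_create basen1p0 20480 -t -u 82ea626c-4a05-4a34-8e18-91446a073e65   # thin 20 GiB base lvol
$RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0       # -> cachen1
$RPC bdev_split_create cachen1 -s 5120 1                                # -> cachen1p0 (5 GiB NV cache)
$RPC -t 60 bdev_ftl_create -b ftl -d f8a4249f-9222-4a19-8b54-deecd066a271 -c cachen1p0 --l2p_dram_limit 2

# Where "L2P entries: 3774873" in the dump plausibly comes from (assuming an
# FTL overprovisioning reserve of ~20%, which this log does not state):
#   data_btm region = 18432 MiB = 18432 * 256 = 4718592 4-KiB blocks
#   usable LBAs     = 4718592 * 80 / 100 = 3774873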
00:30:34.500 [2024-12-05 13:01:34.272716] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:37.026 [2024-12-05 13:01:36.460386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.460468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:37.027 [2024-12-05 13:01:36.460486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2187.655 ms 00:30:37.027 [2024-12-05 13:01:36.460495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.471336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.471396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:37.027 [2024-12-05 13:01:36.471413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.732 ms 00:30:37.027 [2024-12-05 13:01:36.471423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.471488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.471498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:37.027 [2024-12-05 13:01:36.471508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:37.027 [2024-12-05 13:01:36.471516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.482351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.482406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:37.027 [2024-12-05 13:01:36.482421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.760 ms 00:30:37.027 [2024-12-05 13:01:36.482433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.482483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.482491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:37.027 [2024-12-05 13:01:36.482502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:37.027 [2024-12-05 13:01:36.482510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.482990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.483020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:37.027 [2024-12-05 13:01:36.483033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.410 ms 00:30:37.027 [2024-12-05 13:01:36.483042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.483099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.483109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:37.027 [2024-12-05 13:01:36.483120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:37.027 [2024-12-05 13:01:36.483129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.490047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.490092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:37.027 [2024-12-05 13:01:36.490104] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.892 ms 00:30:37.027 [2024-12-05 13:01:36.490113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.509938] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:37.027 [2024-12-05 13:01:36.511116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.511157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:37.027 [2024-12-05 13:01:36.511173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.914 ms 00:30:37.027 [2024-12-05 13:01:36.511185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.522698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.522788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:37.027 [2024-12-05 13:01:36.522816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.440 ms 00:30:37.027 [2024-12-05 13:01:36.522832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.522942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.522959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:37.027 [2024-12-05 13:01:36.522968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:37.027 [2024-12-05 13:01:36.522978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.525944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.526002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:37.027 [2024-12-05 13:01:36.526023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.946 ms 00:30:37.027 [2024-12-05 13:01:36.526034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.528350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.528394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:37.027 [2024-12-05 13:01:36.528404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.276 ms 00:30:37.027 [2024-12-05 13:01:36.528413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.528729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.528752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:37.027 [2024-12-05 13:01:36.528763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:30:37.027 [2024-12-05 13:01:36.528774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.553242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.553318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:37.027 [2024-12-05 13:01:36.553336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.413 ms 00:30:37.027 [2024-12-05 13:01:36.553347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.557795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:37.027 [2024-12-05 13:01:36.557867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:37.027 [2024-12-05 13:01:36.557879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.355 ms 00:30:37.027 [2024-12-05 13:01:36.557889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.561100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.561157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:37.027 [2024-12-05 13:01:36.561169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.172 ms 00:30:37.027 [2024-12-05 13:01:36.561179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.564212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.564266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:37.027 [2024-12-05 13:01:36.564277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.995 ms 00:30:37.027 [2024-12-05 13:01:36.564290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.564351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.564366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:37.027 [2024-12-05 13:01:36.564375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:37.027 [2024-12-05 13:01:36.564384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.564461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:37.027 [2024-12-05 13:01:36.564476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:37.027 [2024-12-05 13:01:36.564485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:37.027 [2024-12-05 13:01:36.564498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:37.027 [2024-12-05 13:01:36.565562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2304.165 ms, result 0 00:30:37.027 { 00:30:37.027 "name": "ftl", 00:30:37.027 "uuid": "57102aeb-cbe4-425d-bbdd-2f99276d7012" 00:30:37.027 } 00:30:37.027 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:37.027 [2024-12-05 13:01:36.734438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.027 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:37.392 13:01:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:37.392 [2024-12-05 13:01:37.126763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:37.392 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:37.651 [2024-12-05 13:01:37.299137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:37.651 13:01:37 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:37.910 Fill FTL, iteration 1 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:37.910 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91508 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91508 /var/tmp/spdk.tgt.sock 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91508 ']' 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:37.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.911 13:01:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:37.911 [2024-12-05 13:01:37.677437] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
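The fill parameters set at upgrade_shutdown.sh@28-34 above determine how much data the two fill passes push through the FTL bdev. A quick sanity check, using the 4096-byte block size reported by bdev_get_bdevs earlier in this log:

bs=1048576; count=1024; iterations=2                 # values from upgrade_shutdown.sh@31-33
per_iter=$(( bs * count ))                           # 1073741824 B = 1 GiB per pass (matches size= above)
total_lbas=$(( iterations * per_iter / 4096 ))       # 524288 4-KiB LBAs in total
echo "$per_iter $total_lbas"
# 524288 matches the "total valid LBAs: 524288" reported at shutdown near the end of this log.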
00:30:37.911 [2024-12-05 13:01:37.677757] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91508 ] 00:30:38.171 [2024-12-05 13:01:37.832591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.171 [2024-12-05 13:01:37.858061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.738 13:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.738 13:01:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:38.738 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:38.997 ftln1 00:30:38.997 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:38.997 13:01:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91508 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91508 ']' 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91508 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91508 00:30:39.255 killing process with pid 91508 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91508' 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91508 00:30:39.255 13:01:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91508 00:30:39.822 13:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:39.822 13:01:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:39.822 [2024-12-05 13:01:39.451349] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
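Each tcp_dd invocation above follows the same two-sided pattern: the main target exposes the FTL bdev over NVMe/TCP, and a secondary spdk_tgt attaches to it as an initiator, yielding the ftln1 bdev that spdk_dd drives. A sketch of the two halves, with every command taken verbatim from this log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Target side (main app, ftl/common.sh@121-124 above):
$RPC nvmf_create_transport --trtype TCP
$RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

# Initiator side (secondary spdk_tgt on /var/tmp/spdk.tgt.sock, ftl/common.sh@167):
$RPC -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0            # -> ftln1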
00:30:39.822 [2024-12-05 13:01:39.451767] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91537 ] 00:30:39.822 [2024-12-05 13:01:39.613203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.822 [2024-12-05 13:01:39.638970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.197  [2024-12-05T13:01:41.979Z] Copying: 219/1024 [MB] (219 MBps) [2024-12-05T13:01:42.909Z] Copying: 443/1024 [MB] (224 MBps) [2024-12-05T13:01:43.841Z] Copying: 683/1024 [MB] (240 MBps) [2024-12-05T13:01:44.098Z] Copying: 956/1024 [MB] (273 MBps) [2024-12-05T13:01:44.362Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:30:44.510 00:30:44.510 Calculate MD5 checksum, iteration 1 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.510 13:01:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.510 [2024-12-05 13:01:44.342712] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
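The fill and read-back passes are symmetric: fills write /dev/urandom into ftln1 at a --seek offset, and checks read the same window back into a file with --skip. A sketch pairing the two spdk_dd invocations, both verbatim from this log; the seek=1024 / skip=1024 updates in this pass show that both offsets advance by count (1024 one-MiB blocks) after each iteration:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
$DD '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$CFG \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0       # fill pass
$DD '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=$CFG \
    --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0                                    # read-back pass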
00:30:44.510 [2024-12-05 13:01:44.342901] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91591 ] 00:30:44.768 [2024-12-05 13:01:44.500096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.768 [2024-12-05 13:01:44.535352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.161  [2024-12-05T13:01:46.271Z] Copying: 668/1024 [MB] (668 MBps) [2024-12-05T13:01:46.530Z] Copying: 1024/1024 [MB] (average 669 MBps) 00:30:46.678 00:30:46.678 13:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:46.678 13:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:49.204 Fill FTL, iteration 2 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=72b2481a53f126e0a3b5fc4068d662f1 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:49.204 13:01:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:49.204 [2024-12-05 13:01:48.716645] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
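Each checksum above is captured from the read-back file; judging by the md5sum and cut -f1 '-d ' calls at upgrade_shutdown.sh@47-48, the capture is presumably equivalent to this sketch:

FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
sums[i]=$(md5sum "$FILE" | cut -f1 -d' ')   # e.g. 72b2481a53f126e0a3b5fc4068d662f1 for iteration 1
# The per-iteration sums are stored in the sums array so the same windows
# can be re-verified later in the test.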
00:30:49.204 [2024-12-05 13:01:48.716783] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91642 ] 00:30:49.204 [2024-12-05 13:01:48.873800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.204 [2024-12-05 13:01:48.900107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.580  [2024-12-05T13:01:51.369Z] Copying: 224/1024 [MB] (224 MBps) [2024-12-05T13:01:52.307Z] Copying: 428/1024 [MB] (204 MBps) [2024-12-05T13:01:53.245Z] Copying: 626/1024 [MB] (198 MBps) [2024-12-05T13:01:54.181Z] Copying: 831/1024 [MB] (205 MBps) [2024-12-05T13:01:54.438Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:30:54.586 00:30:54.586 Calculate MD5 checksum, iteration 2 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:54.586 13:01:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:54.586 [2024-12-05 13:01:54.344042] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
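After the second checksum, the test enables verbose_mode so bdev_ftl_get_properties exposes band and chunk state, then counts used NV cache chunks. The check performed below (upgrade_shutdown.sh@59-64, see the used=3 guard) boils down to this jq filter over the properties dump:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
# -> 3 in this run (two CLOSED chunks at utilization 1.0 plus one OPEN chunk),
#    so the "[[ 3 -eq 0 ]]" guard below does not trip.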
00:30:54.586 [2024-12-05 13:01:54.344191] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91700 ] 00:30:54.844 [2024-12-05 13:01:54.497702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.844 [2024-12-05 13:01:54.523895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.236  [2024-12-05T13:01:56.660Z] Copying: 652/1024 [MB] (652 MBps) [2024-12-05T13:01:57.227Z] Copying: 1024/1024 [MB] (average 661 MBps) 00:30:57.375 00:30:57.375 13:01:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:57.375 13:01:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:59.273 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:59.273 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a370dcd3d0e303635b1a1d547d836098 00:30:59.273 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:59.273 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:59.273 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:59.531 [2024-12-05 13:01:59.218167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.531 [2024-12-05 13:01:59.218227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:59.531 [2024-12-05 13:01:59.218243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:59.531 [2024-12-05 13:01:59.218254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.531 [2024-12-05 13:01:59.218281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.531 [2024-12-05 13:01:59.218291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:59.531 [2024-12-05 13:01:59.218300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:59.531 [2024-12-05 13:01:59.218308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.531 [2024-12-05 13:01:59.218329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.531 [2024-12-05 13:01:59.218337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:59.531 [2024-12-05 13:01:59.218349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:59.531 [2024-12-05 13:01:59.218357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.531 [2024-12-05 13:01:59.218428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.256 ms, result 0 00:30:59.531 true 00:30:59.531 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:59.789 { 00:30:59.789 "name": "ftl", 00:30:59.789 "properties": [ 00:30:59.789 { 00:30:59.789 "name": "superblock_version", 00:30:59.789 "value": 5, 00:30:59.789 "read-only": true 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "name": "base_device", 00:30:59.789 "bands": [ 00:30:59.789 { 00:30:59.789 "id": 0, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 
00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 1, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 2, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 3, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 4, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 5, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 6, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 7, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 8, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 9, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 10, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 11, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 12, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 13, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 14, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 15, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 16, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 17, 00:30:59.789 "state": "FREE", 00:30:59.789 "validity": 0.0 00:30:59.789 } 00:30:59.789 ], 00:30:59.789 "read-only": true 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "name": "cache_device", 00:30:59.789 "type": "bdev", 00:30:59.789 "chunks": [ 00:30:59.789 { 00:30:59.789 "id": 0, 00:30:59.789 "state": "INACTIVE", 00:30:59.789 "utilization": 0.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 1, 00:30:59.789 "state": "CLOSED", 00:30:59.789 "utilization": 1.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 2, 00:30:59.789 "state": "CLOSED", 00:30:59.789 "utilization": 1.0 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 3, 00:30:59.789 "state": "OPEN", 00:30:59.789 "utilization": 0.001953125 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "id": 4, 00:30:59.789 "state": "OPEN", 00:30:59.789 "utilization": 0.0 00:30:59.789 } 00:30:59.789 ], 00:30:59.789 "read-only": true 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "name": "verbose_mode", 00:30:59.789 "value": true, 00:30:59.789 "unit": "", 00:30:59.789 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:59.789 }, 00:30:59.789 { 00:30:59.789 "name": "prep_upgrade_on_shutdown", 00:30:59.789 "value": false, 00:30:59.789 "unit": "", 00:30:59.789 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:59.789 } 00:30:59.789 ] 00:30:59.789 } 00:30:59.789 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:00.048 [2024-12-05 13:01:59.650567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:00.048 [2024-12-05 13:01:59.650625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:00.048 [2024-12-05 13:01:59.650638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:00.048 [2024-12-05 13:01:59.650645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.048 [2024-12-05 13:01:59.650665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.048 [2024-12-05 13:01:59.650673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:00.048 [2024-12-05 13:01:59.650680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:00.048 [2024-12-05 13:01:59.650686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.048 [2024-12-05 13:01:59.650703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.048 [2024-12-05 13:01:59.650710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:00.048 [2024-12-05 13:01:59.650717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:00.048 [2024-12-05 13:01:59.650723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.048 [2024-12-05 13:01:59.650774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.207 ms, result 0 00:31:00.048 true 00:31:00.048 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:00.048 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:00.048 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:00.048 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:00.048 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:00.048 13:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:00.305 [2024-12-05 13:02:00.074943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.305 [2024-12-05 13:02:00.074997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:00.305 [2024-12-05 13:02:00.075009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:00.305 [2024-12-05 13:02:00.075016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.305 [2024-12-05 13:02:00.075035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.305 [2024-12-05 13:02:00.075042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:00.305 [2024-12-05 13:02:00.075049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:00.305 [2024-12-05 13:02:00.075056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.305 [2024-12-05 13:02:00.075071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.305 [2024-12-05 13:02:00.075078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:00.305 [2024-12-05 13:02:00.075085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:00.305 [2024-12-05 13:02:00.075091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:00.305 [2024-12-05 13:02:00.075142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.195 ms, result 0 00:31:00.305 true 00:31:00.305 13:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:00.563 { 00:31:00.563 "name": "ftl", 00:31:00.563 "properties": [ 00:31:00.563 { 00:31:00.563 "name": "superblock_version", 00:31:00.563 "value": 5, 00:31:00.563 "read-only": true 00:31:00.563 }, 00:31:00.563 { 00:31:00.563 "name": "base_device", 00:31:00.563 "bands": [ 00:31:00.563 { 00:31:00.563 "id": 0, 00:31:00.563 "state": "FREE", 00:31:00.563 "validity": 0.0 00:31:00.563 }, 00:31:00.563 { 00:31:00.563 "id": 1, 00:31:00.563 "state": "FREE", 00:31:00.563 "validity": 0.0 00:31:00.563 }, 00:31:00.563 { 00:31:00.563 "id": 2, 00:31:00.563 "state": "FREE", 00:31:00.563 "validity": 0.0 00:31:00.563 }, 00:31:00.563 { 00:31:00.563 "id": 3, 00:31:00.563 "state": "FREE", 00:31:00.563 "validity": 0.0 00:31:00.563 }, 00:31:00.563 { 00:31:00.563 "id": 4, 00:31:00.563 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 5, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 6, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 7, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 8, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 9, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 10, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 11, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 12, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 13, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 14, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 15, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 16, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 17, 00:31:00.564 "state": "FREE", 00:31:00.564 "validity": 0.0 00:31:00.564 } 00:31:00.564 ], 00:31:00.564 "read-only": true 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "name": "cache_device", 00:31:00.564 "type": "bdev", 00:31:00.564 "chunks": [ 00:31:00.564 { 00:31:00.564 "id": 0, 00:31:00.564 "state": "INACTIVE", 00:31:00.564 "utilization": 0.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 1, 00:31:00.564 "state": "CLOSED", 00:31:00.564 "utilization": 1.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 2, 00:31:00.564 "state": "CLOSED", 00:31:00.564 "utilization": 1.0 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 3, 00:31:00.564 "state": "OPEN", 00:31:00.564 "utilization": 0.001953125 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "id": 4, 00:31:00.564 "state": "OPEN", 00:31:00.564 "utilization": 0.0 00:31:00.564 } 00:31:00.564 ], 00:31:00.564 "read-only": true 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "name": "verbose_mode", 
00:31:00.564 "value": true, 00:31:00.564 "unit": "", 00:31:00.564 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:00.564 }, 00:31:00.564 { 00:31:00.564 "name": "prep_upgrade_on_shutdown", 00:31:00.564 "value": true, 00:31:00.564 "unit": "", 00:31:00.564 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:00.564 } 00:31:00.564 ] 00:31:00.564 } 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91386 ]] 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91386 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91386 ']' 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91386 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91386 00:31:00.564 killing process with pid 91386 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91386' 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91386 00:31:00.564 13:02:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91386 00:31:00.822 [2024-12-05 13:02:00.449796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:00.822 [2024-12-05 13:02:00.455167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.822 [2024-12-05 13:02:00.455215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:00.822 [2024-12-05 13:02:00.455227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:00.822 [2024-12-05 13:02:00.455234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.822 [2024-12-05 13:02:00.455255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:00.822 [2024-12-05 13:02:00.455776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.822 [2024-12-05 13:02:00.455803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:00.822 [2024-12-05 13:02:00.455883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.509 ms 00:31:00.822 [2024-12-05 13:02:00.455896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.509422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.509506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:08.930 [2024-12-05 13:02:08.509522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8053.467 ms 00:31:08.930 [2024-12-05 13:02:08.509532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.510829] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.510852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:08.930 [2024-12-05 13:02:08.510863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.279 ms 00:31:08.930 [2024-12-05 13:02:08.510873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.512001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.512031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:08.930 [2024-12-05 13:02:08.512042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:31:08.930 [2024-12-05 13:02:08.512051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.513822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.513884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:08.930 [2024-12-05 13:02:08.513897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.673 ms 00:31:08.930 [2024-12-05 13:02:08.513905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.516011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.516046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:08.930 [2024-12-05 13:02:08.516058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.050 ms 00:31:08.930 [2024-12-05 13:02:08.516071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.516194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.516211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:08.930 [2024-12-05 13:02:08.516233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:31:08.930 [2024-12-05 13:02:08.516245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.517221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.517250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:08.930 [2024-12-05 13:02:08.517259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.960 ms 00:31:08.930 [2024-12-05 13:02:08.517267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.518199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.518229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:08.930 [2024-12-05 13:02:08.518238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:31:08.930 [2024-12-05 13:02:08.518246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.519101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.519131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:08.930 [2024-12-05 13:02:08.519140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.801 ms 00:31:08.930 [2024-12-05 13:02:08.519148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.520103] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.930 [2024-12-05 13:02:08.520133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:08.930 [2024-12-05 13:02:08.520142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.891 ms 00:31:08.930 [2024-12-05 13:02:08.520149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.930 [2024-12-05 13:02:08.520184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:08.930 [2024-12-05 13:02:08.520198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:08.930 [2024-12-05 13:02:08.520209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:08.930 [2024-12-05 13:02:08.520218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:08.930 [2024-12-05 13:02:08.520226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:08.930 [2024-12-05 13:02:08.520234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:08.930 [2024-12-05 13:02:08.520241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:08.930 [2024-12-05 13:02:08.520249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:08.930 [2024-12-05 13:02:08.520257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:08.930 [2024-12-05 13:02:08.520265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:08.930 [2024-12-05 13:02:08.520273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:08.930 [2024-12-05 13:02:08.520280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:08.931 [2024-12-05 13:02:08.520342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:08.931 [2024-12-05 13:02:08.520350] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 57102aeb-cbe4-425d-bbdd-2f99276d7012 00:31:08.931 [2024-12-05 13:02:08.520359] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:08.931 [2024-12-05 13:02:08.520380] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:08.931 [2024-12-05 13:02:08.520388] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:08.931 [2024-12-05 13:02:08.520397] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:08.931 [2024-12-05 13:02:08.520404] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:08.931 [2024-12-05 13:02:08.520412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:08.931 [2024-12-05 13:02:08.520420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:08.931 [2024-12-05 13:02:08.520426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:08.931 [2024-12-05 13:02:08.520432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:08.931 [2024-12-05 13:02:08.520439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.931 [2024-12-05 13:02:08.520446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:08.931 [2024-12-05 13:02:08.520454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:31:08.931 [2024-12-05 13:02:08.520462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.522311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.931 [2024-12-05 13:02:08.522341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:08.931 [2024-12-05 13:02:08.522352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.823 ms 00:31:08.931 [2024-12-05 13:02:08.522361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.522453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.931 [2024-12-05 13:02:08.522463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:08.931 [2024-12-05 13:02:08.522472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:31:08.931 [2024-12-05 13:02:08.522480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.528874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.528918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:08.931 [2024-12-05 13:02:08.528930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.528938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.528972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.528982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:08.931 [2024-12-05 13:02:08.528998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.529006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.529065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.529075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:08.931 [2024-12-05 13:02:08.529083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.529091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.529111] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.529128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:08.931 [2024-12-05 13:02:08.529136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.529143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.541353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.541416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:08.931 [2024-12-05 13:02:08.541428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.541437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.550496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.550561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:08.931 [2024-12-05 13:02:08.550573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.550581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.550669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.550689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:08.931 [2024-12-05 13:02:08.550698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.550705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.550738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.550747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:08.931 [2024-12-05 13:02:08.550755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.550763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.550851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.550871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:08.931 [2024-12-05 13:02:08.550882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.550890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.550921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.550930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:08.931 [2024-12-05 13:02:08.550939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.550948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.550988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.550997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:08.931 [2024-12-05 13:02:08.551008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.551015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 
[2024-12-05 13:02:08.551062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:08.931 [2024-12-05 13:02:08.551072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:08.931 [2024-12-05 13:02:08.551080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:08.931 [2024-12-05 13:02:08.551087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.931 [2024-12-05 13:02:08.551223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8095.992 ms, result 0 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:17.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91876 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91876 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91876 ']' 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.049 13:02:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:17.049 [2024-12-05 13:02:16.380154] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
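[annotation] The xtrace above (ftl/common.sh@81-91) brings the NVMe/TCP target back up after the clean FTL shutdown: spdk_tgt is relaunched pinned to core 0 with the saved tgt.json config, its PID is exported, and waitforlisten blocks until the RPC socket answers. A minimal sketch of that sequence, assuming a simple polling loop for waitforlisten (the trace shows only the call, not the helper's body):

# tcp_target_setup as traced above; binary path, cpumask, config file and
# PID handling come from the xtrace. The polling loop standing in for
# waitforlisten is an assumption, not the helper's actual implementation.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
export spdk_tgt_pid
# poll the default RPC socket until the target responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    rpc_get_methods &> /dev/null; do
    sleep 0.1
done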
00:31:17.049 [2024-12-05 13:02:16.380372] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91876 ] 00:31:17.049 [2024-12-05 13:02:16.555475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.049 [2024-12-05 13:02:16.582499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.049 [2024-12-05 13:02:16.887774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:17.049 [2024-12-05 13:02:16.887872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:17.308 [2024-12-05 13:02:17.032753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.032839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:17.308 [2024-12-05 13:02:17.032858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:17.308 [2024-12-05 13:02:17.032867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.032935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.032946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:17.308 [2024-12-05 13:02:17.032957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:17.308 [2024-12-05 13:02:17.032965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.032995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:17.308 [2024-12-05 13:02:17.033630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:17.308 [2024-12-05 13:02:17.033676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.033688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:17.308 [2024-12-05 13:02:17.033699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:31:17.308 [2024-12-05 13:02:17.033707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.035260] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:17.308 [2024-12-05 13:02:17.037881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.037920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:17.308 [2024-12-05 13:02:17.037931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.622 ms 00:31:17.308 [2024-12-05 13:02:17.037939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.038005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.038016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:17.308 [2024-12-05 13:02:17.038029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:17.308 [2024-12-05 13:02:17.038040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.044389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 
13:02:17.044429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:17.308 [2024-12-05 13:02:17.044440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.295 ms 00:31:17.308 [2024-12-05 13:02:17.044449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.044509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.044519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:17.308 [2024-12-05 13:02:17.044528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:17.308 [2024-12-05 13:02:17.044535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.044588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.044601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:17.308 [2024-12-05 13:02:17.044609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:17.308 [2024-12-05 13:02:17.044617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.044643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:17.308 [2024-12-05 13:02:17.046294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.046323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:17.308 [2024-12-05 13:02:17.046333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.660 ms 00:31:17.308 [2024-12-05 13:02:17.046341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.046380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.046389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:17.308 [2024-12-05 13:02:17.046398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:17.308 [2024-12-05 13:02:17.046406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.046428] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:17.308 [2024-12-05 13:02:17.046450] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:17.308 [2024-12-05 13:02:17.046487] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:17.308 [2024-12-05 13:02:17.046505] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:17.308 [2024-12-05 13:02:17.046611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:17.308 [2024-12-05 13:02:17.046631] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:17.308 [2024-12-05 13:02:17.046642] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:17.308 [2024-12-05 13:02:17.046652] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:17.308 [2024-12-05 13:02:17.046662] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:17.308 [2024-12-05 13:02:17.046671] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:17.308 [2024-12-05 13:02:17.046678] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:17.308 [2024-12-05 13:02:17.046694] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:17.308 [2024-12-05 13:02:17.046701] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:17.308 [2024-12-05 13:02:17.046711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.046724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:17.308 [2024-12-05 13:02:17.046732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:31:17.308 [2024-12-05 13:02:17.046741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.046837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.308 [2024-12-05 13:02:17.046851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:17.308 [2024-12-05 13:02:17.046859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:31:17.308 [2024-12-05 13:02:17.046867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.308 [2024-12-05 13:02:17.046985] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:17.308 [2024-12-05 13:02:17.046996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:17.308 [2024-12-05 13:02:17.047008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:17.308 [2024-12-05 13:02:17.047018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.308 [2024-12-05 13:02:17.047027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:17.308 [2024-12-05 13:02:17.047036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:17.308 [2024-12-05 13:02:17.047044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:17.308 [2024-12-05 13:02:17.047052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:17.308 [2024-12-05 13:02:17.047060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:17.308 [2024-12-05 13:02:17.047067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.308 [2024-12-05 13:02:17.047075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:17.308 [2024-12-05 13:02:17.047083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:17.308 [2024-12-05 13:02:17.047091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.308 [2024-12-05 13:02:17.047106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:17.308 [2024-12-05 13:02:17.047114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:17.308 [2024-12-05 13:02:17.047127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.308 [2024-12-05 13:02:17.047136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:17.308 [2024-12-05 13:02:17.047143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:17.308 [2024-12-05 13:02:17.047150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.308 [2024-12-05 13:02:17.047158] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:17.308 [2024-12-05 13:02:17.047168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:17.308 [2024-12-05 13:02:17.047177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:17.308 [2024-12-05 13:02:17.047184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:17.308 [2024-12-05 13:02:17.047191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:17.308 [2024-12-05 13:02:17.047198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:17.308 [2024-12-05 13:02:17.047205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:17.308 [2024-12-05 13:02:17.047211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:17.308 [2024-12-05 13:02:17.047219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:17.308 [2024-12-05 13:02:17.047226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:17.308 [2024-12-05 13:02:17.047233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:17.308 [2024-12-05 13:02:17.047239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:17.308 [2024-12-05 13:02:17.047248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:17.308 [2024-12-05 13:02:17.047255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:17.308 [2024-12-05 13:02:17.047262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.308 [2024-12-05 13:02:17.047268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:17.309 [2024-12-05 13:02:17.047276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:17.309 [2024-12-05 13:02:17.047282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.309 [2024-12-05 13:02:17.047288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:17.309 [2024-12-05 13:02:17.047294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:17.309 [2024-12-05 13:02:17.047301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.309 [2024-12-05 13:02:17.047307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:17.309 [2024-12-05 13:02:17.047313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:17.309 [2024-12-05 13:02:17.047320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.309 [2024-12-05 13:02:17.047326] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:17.309 [2024-12-05 13:02:17.047334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:17.309 [2024-12-05 13:02:17.047344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:17.309 [2024-12-05 13:02:17.047351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:17.309 [2024-12-05 13:02:17.047361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:17.309 [2024-12-05 13:02:17.047368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:17.309 [2024-12-05 13:02:17.047375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:17.309 [2024-12-05 13:02:17.047382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:17.309 [2024-12-05 13:02:17.047388] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:17.309 [2024-12-05 13:02:17.047397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:17.309 [2024-12-05 13:02:17.047406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:17.309 [2024-12-05 13:02:17.047415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:17.309 [2024-12-05 13:02:17.047432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:17.309 [2024-12-05 13:02:17.047453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:17.309 [2024-12-05 13:02:17.047461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:17.309 [2024-12-05 13:02:17.047468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:17.309 [2024-12-05 13:02:17.047475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:17.309 [2024-12-05 13:02:17.047525] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:17.309 [2024-12-05 13:02:17.047537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:17.309 [2024-12-05 13:02:17.047552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:17.309 [2024-12-05 13:02:17.047559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:17.309 [2024-12-05 13:02:17.047567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:17.309 [2024-12-05 13:02:17.047574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:17.309 [2024-12-05 13:02:17.047587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:17.309 [2024-12-05 13:02:17.047594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.659 ms 00:31:17.309 [2024-12-05 13:02:17.047601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:17.309 [2024-12-05 13:02:17.047645] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:17.309 [2024-12-05 13:02:17.047654] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:19.210 [2024-12-05 13:02:18.987363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.210 [2024-12-05 13:02:18.987443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:19.210 [2024-12-05 13:02:18.987473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1939.710 ms 00:31:19.210 [2024-12-05 13:02:18.987486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.210 [2024-12-05 13:02:18.997910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.210 [2024-12-05 13:02:18.997968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:19.210 [2024-12-05 13:02:18.997981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.343 ms 00:31:19.210 [2024-12-05 13:02:18.997991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.210 [2024-12-05 13:02:18.998051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.210 [2024-12-05 13:02:18.998060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:19.210 [2024-12-05 13:02:18.998070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:19.210 [2024-12-05 13:02:18.998083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.210 [2024-12-05 13:02:19.008534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.210 [2024-12-05 13:02:19.008583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:19.211 [2024-12-05 13:02:19.008596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.392 ms 00:31:19.211 [2024-12-05 13:02:19.008604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.008648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.008656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:19.211 [2024-12-05 13:02:19.008669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:19.211 [2024-12-05 13:02:19.008677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.009138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.009171] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:19.211 [2024-12-05 13:02:19.009181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:31:19.211 [2024-12-05 13:02:19.009196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.009248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.009258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:19.211 [2024-12-05 13:02:19.009268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:19.211 [2024-12-05 13:02:19.009280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.015953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.015984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:19.211 [2024-12-05 13:02:19.016003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.648 ms 00:31:19.211 [2024-12-05 13:02:19.016012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.027618] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:19.211 [2024-12-05 13:02:19.027682] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:19.211 [2024-12-05 13:02:19.027708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.027722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:19.211 [2024-12-05 13:02:19.027737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.608 ms 00:31:19.211 [2024-12-05 13:02:19.027749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.033387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.033435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:19.211 [2024-12-05 13:02:19.033451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.596 ms 00:31:19.211 [2024-12-05 13:02:19.033464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.035275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.035320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:19.211 [2024-12-05 13:02:19.035334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.756 ms 00:31:19.211 [2024-12-05 13:02:19.035345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.037017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.037060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:19.211 [2024-12-05 13:02:19.037074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.639 ms 00:31:19.211 [2024-12-05 13:02:19.037086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.037572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.037602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:19.211 [2024-12-05 
13:02:19.037617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:31:19.211 [2024-12-05 13:02:19.037629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.211 [2024-12-05 13:02:19.054634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.211 [2024-12-05 13:02:19.054694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:19.211 [2024-12-05 13:02:19.054707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.972 ms 00:31:19.211 [2024-12-05 13:02:19.054716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.062484] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:19.470 [2024-12-05 13:02:19.063402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.470 [2024-12-05 13:02:19.063433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:19.470 [2024-12-05 13:02:19.063446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.637 ms 00:31:19.470 [2024-12-05 13:02:19.063455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.063525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.470 [2024-12-05 13:02:19.063536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:19.470 [2024-12-05 13:02:19.063545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:19.470 [2024-12-05 13:02:19.063553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.063620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.470 [2024-12-05 13:02:19.063631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:19.470 [2024-12-05 13:02:19.063643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:19.470 [2024-12-05 13:02:19.063651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.063673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.470 [2024-12-05 13:02:19.063681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:19.470 [2024-12-05 13:02:19.063690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:19.470 [2024-12-05 13:02:19.063704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.063739] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:19.470 [2024-12-05 13:02:19.063749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.470 [2024-12-05 13:02:19.063758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:19.470 [2024-12-05 13:02:19.063766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:19.470 [2024-12-05 13:02:19.063776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.067429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.470 [2024-12-05 13:02:19.067466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:19.470 [2024-12-05 13:02:19.067477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.633 ms 00:31:19.470 [2024-12-05 13:02:19.067485] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.067564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.470 [2024-12-05 13:02:19.067574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:19.470 [2024-12-05 13:02:19.067588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:31:19.470 [2024-12-05 13:02:19.067598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.470 [2024-12-05 13:02:19.068631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2035.463 ms, result 0 00:31:19.470 [2024-12-05 13:02:19.083830] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.470 [2024-12-05 13:02:19.099817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:19.470 [2024-12-05 13:02:19.107931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:19.470 13:02:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.470 13:02:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:19.470 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:19.470 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:19.470 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:19.728 [2024-12-05 13:02:19.328073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.728 [2024-12-05 13:02:19.328136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:19.728 [2024-12-05 13:02:19.328151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:19.728 [2024-12-05 13:02:19.328159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.728 [2024-12-05 13:02:19.328182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.728 [2024-12-05 13:02:19.328191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:19.728 [2024-12-05 13:02:19.328202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:19.728 [2024-12-05 13:02:19.328210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.728 [2024-12-05 13:02:19.328230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.728 [2024-12-05 13:02:19.328238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:19.728 [2024-12-05 13:02:19.328246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:19.728 [2024-12-05 13:02:19.328253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.728 [2024-12-05 13:02:19.328315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:31:19.728 true 00:31:19.728 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.728 { 00:31:19.728 "name": "ftl", 00:31:19.728 "properties": [ 00:31:19.728 { 00:31:19.728 "name": "superblock_version", 00:31:19.728 "value": 5, 00:31:19.728 "read-only": true 00:31:19.728 }, 00:31:19.728 { 
00:31:19.728 "name": "base_device", 00:31:19.729 "bands": [ 00:31:19.729 { 00:31:19.729 "id": 0, 00:31:19.729 "state": "CLOSED", 00:31:19.729 "validity": 1.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 1, 00:31:19.729 "state": "CLOSED", 00:31:19.729 "validity": 1.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 2, 00:31:19.729 "state": "CLOSED", 00:31:19.729 "validity": 0.007843137254901933 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 3, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 4, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 5, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 6, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 7, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 8, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 9, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 10, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 11, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 12, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 13, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 14, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 15, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 16, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 17, 00:31:19.729 "state": "FREE", 00:31:19.729 "validity": 0.0 00:31:19.729 } 00:31:19.729 ], 00:31:19.729 "read-only": true 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "name": "cache_device", 00:31:19.729 "type": "bdev", 00:31:19.729 "chunks": [ 00:31:19.729 { 00:31:19.729 "id": 0, 00:31:19.729 "state": "INACTIVE", 00:31:19.729 "utilization": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 1, 00:31:19.729 "state": "OPEN", 00:31:19.729 "utilization": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 2, 00:31:19.729 "state": "OPEN", 00:31:19.729 "utilization": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 3, 00:31:19.729 "state": "FREE", 00:31:19.729 "utilization": 0.0 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "id": 4, 00:31:19.729 "state": "FREE", 00:31:19.729 "utilization": 0.0 00:31:19.729 } 00:31:19.729 ], 00:31:19.729 "read-only": true 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "name": "verbose_mode", 00:31:19.729 "value": true, 00:31:19.729 "unit": "", 00:31:19.729 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:19.729 }, 00:31:19.729 { 00:31:19.729 "name": "prep_upgrade_on_shutdown", 00:31:19.729 "value": false, 00:31:19.729 "unit": "", 00:31:19.729 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:19.729 } 00:31:19.729 ] 00:31:19.729 } 00:31:19.729 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:31:19.729 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:19.729 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.988 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:19.988 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:19.988 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:19.988 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:19.988 13:02:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:20.246 Validate MD5 checksum, iteration 1 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:20.246 13:02:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:20.246 [2024-12-05 13:02:20.072792] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
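[annotation] Both property probes above come back 0. For the cache chunks that matches the dump (every chunk reports utilization 0.0 after the clean restart), but the band probe filters on a property named "bands" with state "OPENED", while both bdev_ftl_get_properties dumps label that section "base_device" and show only FREE and CLOSED as band states. A hypothetical filter shaped to the dumped JSON (the open-band state string is assumed by analogy with the chunk dumps, which use "OPEN"):

# Hypothetical variant of the upgrade_shutdown.sh@89 query, using the
# property name ("base_device") seen in the dumps above; the traced filter
# matches nothing in that JSON shape. "OPEN" as the band state string is
# an assumption.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "base_device")
           | .bands[] | select(.state == "OPEN")] | length'

Either way no bands are open at this point, so the assertion at upgrade_shutdown.sh@90 holds.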
00:31:20.246 [2024-12-05 13:02:20.072948] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91926 ] 00:31:20.503 [2024-12-05 13:02:20.232158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.503 [2024-12-05 13:02:20.256352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.941  [2024-12-05T13:02:22.358Z] Copying: 643/1024 [MB] (643 MBps) [2024-12-05T13:02:22.923Z] Copying: 1024/1024 [MB] (average 633 MBps) 00:31:23.071 00:31:23.071 13:02:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:23.071 13:02:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:25.594 Validate MD5 checksum, iteration 2 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=72b2481a53f126e0a3b5fc4068d662f1 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 72b2481a53f126e0a3b5fc4068d662f1 != \7\2\b\2\4\8\1\a\5\3\f\1\2\6\e\0\a\3\b\5\f\c\4\0\6\8\d\6\6\2\f\1 ]] 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:25.594 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:25.595 13:02:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:25.595 [2024-12-05 13:02:25.065955] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
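[annotation] The loop traced at upgrade_shutdown.sh@96-105 validates the data written before the shutdown: each iteration reads 1 GiB (1024 x 1 MiB blocks) from the exported ftln1 bdev through spdk_dd (tcp_dd, ftl/common.sh@198-199, wraps it with the NVMe/TCP initiator config), advances the skip offset, and compares the file's md5 against the checksum recorded when the data was written. A reconstruction from the xtrace; the iterations bound and the saved-checksum array are assumptions, while the commands and variable names are taken from the trace:

# test_validate_checksum as visible in the trace above; ${md5[i]} holding
# the expected sums is hypothetical (the trace shows only literal values).
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # 1024 x 1 MiB blocks, queue depth 2, from the FTL bdev over NVMe/TCP
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum == "${md5[i]}" ]]  # aborts the test on mismatch
done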
00:31:25.595 [2024-12-05 13:02:25.066138] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91983 ] 00:31:25.595 [2024-12-05 13:02:25.234240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.595 [2024-12-05 13:02:25.261922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.025  [2024-12-05T13:02:27.440Z] Copying: 711/1024 [MB] (711 MBps) [2024-12-05T13:02:31.629Z] Copying: 1024/1024 [MB] (average 666 MBps) 00:31:31.777 00:31:31.777 13:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:31.777 13:02:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a370dcd3d0e303635b1a1d547d836098 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a370dcd3d0e303635b1a1d547d836098 != \a\3\7\0\d\c\d\3\d\0\e\3\0\3\6\3\5\b\1\a\1\d\5\4\7\d\8\3\6\0\9\8 ]] 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 91876 ]] 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 91876 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92083 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92083 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 92083 ']' 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
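Both digests match, so the test proceeds to the dirty-shutdown phase: the running target (pid 91876) is killed with SIGKILL, which skips the FTL shutdown path entirely, and a fresh spdk_tgt is started from the saved tgt.json. A sketch of the sequence under the paths and arguments shown in the trace (waitforlisten is the harness helper that blocks until the new process answers on /var/tmp/spdk.sock; the backgrounding and pid capture are assumptions about the helper's internals):

    kill -9 "$spdk_tgt_pid"    # SIGKILL: no graceful FTL shutdown runs, so the superblock stays dirty
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

The 'SHM: clean 0, shm_clean 0' notice a little further down confirms the restarted target loaded a dirty superblock, which is what forces the open-chunk and P2L recovery steps traced through the rest of the startup.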
00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.307 13:02:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:34.307 [2024-12-05 13:02:33.843981] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:34.307 [2024-12-05 13:02:33.844129] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92083 ] 00:31:34.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 91876 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:34.307 [2024-12-05 13:02:33.991418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.307 [2024-12-05 13:02:34.014985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.565 [2024-12-05 13:02:34.312136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:34.565 [2024-12-05 13:02:34.312210] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:34.824 [2024-12-05 13:02:34.454772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.824 [2024-12-05 13:02:34.454856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:34.824 [2024-12-05 13:02:34.454874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:34.824 [2024-12-05 13:02:34.454883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.824 [2024-12-05 13:02:34.454961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.824 [2024-12-05 13:02:34.454973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:34.824 [2024-12-05 13:02:34.454984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:34.824 [2024-12-05 13:02:34.454992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.824 [2024-12-05 13:02:34.455020] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:34.824 [2024-12-05 13:02:34.455327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:34.824 [2024-12-05 13:02:34.455343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.824 [2024-12-05 13:02:34.455351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:34.824 [2024-12-05 13:02:34.455360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:31:34.824 [2024-12-05 13:02:34.455368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.824 [2024-12-05 13:02:34.455755] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:34.824 [2024-12-05 13:02:34.459626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.824 [2024-12-05 13:02:34.459667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:34.824 [2024-12-05 13:02:34.459679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.872 ms 00:31:34.824 [2024-12-05 13:02:34.459688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.824 [2024-12-05 13:02:34.460982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:34.824 [2024-12-05 13:02:34.461014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:34.824 [2024-12-05 13:02:34.461025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:31:34.825 [2024-12-05 13:02:34.461035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.461315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.825 [2024-12-05 13:02:34.461327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:34.825 [2024-12-05 13:02:34.461336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.228 ms 00:31:34.825 [2024-12-05 13:02:34.461344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.461382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.825 [2024-12-05 13:02:34.461391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:34.825 [2024-12-05 13:02:34.461404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:34.825 [2024-12-05 13:02:34.461411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.461446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.825 [2024-12-05 13:02:34.461458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:34.825 [2024-12-05 13:02:34.461470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:34.825 [2024-12-05 13:02:34.461478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.461502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:34.825 [2024-12-05 13:02:34.462761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.825 [2024-12-05 13:02:34.462787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:34.825 [2024-12-05 13:02:34.462797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms 00:31:34.825 [2024-12-05 13:02:34.462821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.462859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.825 [2024-12-05 13:02:34.462871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:34.825 [2024-12-05 13:02:34.462880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:34.825 [2024-12-05 13:02:34.462887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.462937] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:34.825 [2024-12-05 13:02:34.462966] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:34.825 [2024-12-05 13:02:34.463005] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:34.825 [2024-12-05 13:02:34.463028] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:34.825 [2024-12-05 13:02:34.463137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:34.825 [2024-12-05 13:02:34.463152] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:34.825 [2024-12-05 13:02:34.463166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:34.825 [2024-12-05 13:02:34.463177] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463186] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463195] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:34.825 [2024-12-05 13:02:34.463205] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:34.825 [2024-12-05 13:02:34.463213] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:34.825 [2024-12-05 13:02:34.463220] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:34.825 [2024-12-05 13:02:34.463228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.825 [2024-12-05 13:02:34.463235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:34.825 [2024-12-05 13:02:34.463245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:31:34.825 [2024-12-05 13:02:34.463252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.463341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.825 [2024-12-05 13:02:34.463354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:34.825 [2024-12-05 13:02:34.463364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:34.825 [2024-12-05 13:02:34.463371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.825 [2024-12-05 13:02:34.463475] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:34.825 [2024-12-05 13:02:34.463489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:34.825 [2024-12-05 13:02:34.463499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:34.825 [2024-12-05 13:02:34.463527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:34.825 [2024-12-05 13:02:34.463545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:34.825 [2024-12-05 13:02:34.463554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:34.825 [2024-12-05 13:02:34.463562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:34.825 [2024-12-05 13:02:34.463585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:34.825 [2024-12-05 13:02:34.463592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:34.825 [2024-12-05 13:02:34.463617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:34.825 [2024-12-05 13:02:34.463624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:34.825 [2024-12-05 13:02:34.463640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:34.825 [2024-12-05 13:02:34.463647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:34.825 [2024-12-05 13:02:34.463664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:34.825 [2024-12-05 13:02:34.463671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:34.825 [2024-12-05 13:02:34.463687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:34.825 [2024-12-05 13:02:34.463694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:34.825 [2024-12-05 13:02:34.463710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:34.825 [2024-12-05 13:02:34.463717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:34.825 [2024-12-05 13:02:34.463734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:34.825 [2024-12-05 13:02:34.463743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:34.825 [2024-12-05 13:02:34.463758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:34.825 [2024-12-05 13:02:34.463766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:34.825 [2024-12-05 13:02:34.463782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:34.825 [2024-12-05 13:02:34.463826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:34.825 [2024-12-05 13:02:34.463850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:34.825 [2024-12-05 13:02:34.463858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463865] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:34.825 [2024-12-05 13:02:34.463873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:34.825 [2024-12-05 13:02:34.463884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:34.825 [2024-12-05 13:02:34.463902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:34.825 [2024-12-05 13:02:34.463909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:34.825 [2024-12-05 13:02:34.463916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:34.825 [2024-12-05 13:02:34.463923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:34.825 [2024-12-05 13:02:34.463930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:34.825 [2024-12-05 13:02:34.463936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:34.825 [2024-12-05 13:02:34.463945] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:34.825 [2024-12-05 13:02:34.463954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:34.825 [2024-12-05 13:02:34.463966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:34.825 [2024-12-05 13:02:34.463974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:34.825 [2024-12-05 13:02:34.463981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:34.825 [2024-12-05 13:02:34.463989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:34.825 [2024-12-05 13:02:34.463996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:34.826 [2024-12-05 13:02:34.464004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:34.826 [2024-12-05 13:02:34.464010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:34.826 [2024-12-05 13:02:34.464019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:34.826 [2024-12-05 13:02:34.464068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:34.826 [2024-12-05 13:02:34.464076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:34.826 [2024-12-05 13:02:34.464098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:34.826 [2024-12-05 13:02:34.464105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:34.826 [2024-12-05 13:02:34.464113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:34.826 [2024-12-05 13:02:34.464120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.464130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:34.826 [2024-12-05 13:02:34.464138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.715 ms 00:31:34.826 [2024-12-05 13:02:34.464148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.474232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.474284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:34.826 [2024-12-05 13:02:34.474303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.030 ms 00:31:34.826 [2024-12-05 13:02:34.474311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.474378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.474387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:34.826 [2024-12-05 13:02:34.474396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:34.826 [2024-12-05 13:02:34.474408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.485790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.485861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:34.826 [2024-12-05 13:02:34.485873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.299 ms 00:31:34.826 [2024-12-05 13:02:34.485887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.485954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.485963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:34.826 [2024-12-05 13:02:34.485976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:34.826 [2024-12-05 13:02:34.485991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.486107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.486120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:34.826 [2024-12-05 13:02:34.486129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:31:34.826 [2024-12-05 13:02:34.486136] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.486180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.486189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:34.826 [2024-12-05 13:02:34.486204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:34.826 [2024-12-05 13:02:34.486214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.493427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.493475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:34.826 [2024-12-05 13:02:34.493488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.192 ms 00:31:34.826 [2024-12-05 13:02:34.493497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.493625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.493637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:34.826 [2024-12-05 13:02:34.493649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:34.826 [2024-12-05 13:02:34.493658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.511523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.511599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:34.826 [2024-12-05 13:02:34.511618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.842 ms 00:31:34.826 [2024-12-05 13:02:34.511629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.513621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.513668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:34.826 [2024-12-05 13:02:34.513687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:31:34.826 [2024-12-05 13:02:34.513697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.532602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.532679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:34.826 [2024-12-05 13:02:34.532700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.834 ms 00:31:34.826 [2024-12-05 13:02:34.532710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.532897] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:34.826 [2024-12-05 13:02:34.533005] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:34.826 [2024-12-05 13:02:34.533118] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:34.826 [2024-12-05 13:02:34.533222] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:34.826 [2024-12-05 13:02:34.533232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.533241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:34.826 [2024-12-05 
13:02:34.533261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.451 ms 00:31:34.826 [2024-12-05 13:02:34.533271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.533355] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:34.826 [2024-12-05 13:02:34.533369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.533377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:34.826 [2024-12-05 13:02:34.533387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:34.826 [2024-12-05 13:02:34.533396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.536604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.536652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:34.826 [2024-12-05 13:02:34.536664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.183 ms 00:31:34.826 [2024-12-05 13:02:34.536678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.537454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.537485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:34.826 [2024-12-05 13:02:34.537496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:34.826 [2024-12-05 13:02:34.537504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:34.826 [2024-12-05 13:02:34.537587] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:34.826 [2024-12-05 13:02:34.537761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:34.826 [2024-12-05 13:02:34.537782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:34.826 [2024-12-05 13:02:34.537823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:31:34.826 [2024-12-05 13:02:34.537835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.393 [2024-12-05 13:02:34.958233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.393 [2024-12-05 13:02:34.958321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:35.393 [2024-12-05 13:02:34.958338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 420.067 ms 00:31:35.393 [2024-12-05 13:02:34.958347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.393 [2024-12-05 13:02:34.959698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.393 [2024-12-05 13:02:34.959745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:35.393 [2024-12-05 13:02:34.959769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:31:35.393 [2024-12-05 13:02:34.959777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.393 [2024-12-05 13:02:34.960150] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:35.393 [2024-12-05 13:02:34.960184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.393 [2024-12-05 13:02:34.960200] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:35.393 [2024-12-05 13:02:34.960217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.383 ms 00:31:35.393 [2024-12-05 13:02:34.960225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.393 [2024-12-05 13:02:34.960259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.393 [2024-12-05 13:02:34.960272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:35.393 [2024-12-05 13:02:34.960280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:35.393 [2024-12-05 13:02:34.960288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.393 [2024-12-05 13:02:34.960326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 422.737 ms, result 0 00:31:35.393 [2024-12-05 13:02:34.960371] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:35.393 [2024-12-05 13:02:34.960491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.394 [2024-12-05 13:02:34.960501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:35.394 [2024-12-05 13:02:34.960509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.122 ms 00:31:35.394 [2024-12-05 13:02:34.960516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.384479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.384573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:35.651 [2024-12-05 13:02:35.384593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 423.430 ms 00:31:35.651 [2024-12-05 13:02:35.384606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.385964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.386002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:35.651 [2024-12-05 13:02:35.386017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.790 ms 00:31:35.651 [2024-12-05 13:02:35.386028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.386428] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:35.651 [2024-12-05 13:02:35.386464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.386476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:35.651 [2024-12-05 13:02:35.386488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:31:35.651 [2024-12-05 13:02:35.386498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.386535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.386547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:35.651 [2024-12-05 13:02:35.386558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:35.651 [2024-12-05 13:02:35.386568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 
13:02:35.386616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 426.234 ms, result 0 00:31:35.651 [2024-12-05 13:02:35.386674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:35.651 [2024-12-05 13:02:35.386688] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:35.651 [2024-12-05 13:02:35.386701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.386712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:35.651 [2024-12-05 13:02:35.386722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 849.135 ms 00:31:35.651 [2024-12-05 13:02:35.386738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.386782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.386794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:35.651 [2024-12-05 13:02:35.386827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:35.651 [2024-12-05 13:02:35.386837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.397134] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:35.651 [2024-12-05 13:02:35.397293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.397319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:35.651 [2024-12-05 13:02:35.397332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.434 ms 00:31:35.651 [2024-12-05 13:02:35.397343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.398064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.398090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:35.651 [2024-12-05 13:02:35.398098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.586 ms 00:31:35.651 [2024-12-05 13:02:35.398104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.399812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.399831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:35.651 [2024-12-05 13:02:35.399839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.678 ms 00:31:35.651 [2024-12-05 13:02:35.399846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.399884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.399891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:35.651 [2024-12-05 13:02:35.399899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:35.651 [2024-12-05 13:02:35.399905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.400000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.400013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:35.651 
[2024-12-05 13:02:35.400022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:35.651 [2024-12-05 13:02:35.400028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.400047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.400054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:35.651 [2024-12-05 13:02:35.400060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:35.651 [2024-12-05 13:02:35.400070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.400098] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:35.651 [2024-12-05 13:02:35.400106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.400113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:35.651 [2024-12-05 13:02:35.400120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:35.651 [2024-12-05 13:02:35.400128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.400171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.651 [2024-12-05 13:02:35.400190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:35.651 [2024-12-05 13:02:35.400197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:35.651 [2024-12-05 13:02:35.400203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.651 [2024-12-05 13:02:35.401141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 945.981 ms, result 0 00:31:35.651 [2024-12-05 13:02:35.413658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.651 [2024-12-05 13:02:35.429670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:35.651 [2024-12-05 13:02:35.437772] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:35.651 Validate MD5 checksum, iteration 1 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:35.651 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:35.652 13:02:35 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:35.652 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:35.652 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:35.652 13:02:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:35.909 [2024-12-05 13:02:35.535320] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:35.909 [2024-12-05 13:02:35.535479] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92105 ] 00:31:35.909 [2024-12-05 13:02:35.691480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.909 [2024-12-05 13:02:35.716007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.355  [2024-12-05T13:02:37.773Z] Copying: 631/1024 [MB] (631 MBps) [2024-12-05T13:02:38.732Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:31:38.880 00:31:38.880 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:38.880 13:02:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:41.406 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:41.406 Validate MD5 checksum, iteration 2 00:31:41.406 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=72b2481a53f126e0a3b5fc4068d662f1 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 72b2481a53f126e0a3b5fc4068d662f1 != \7\2\b\2\4\8\1\a\5\3\f\1\2\6\e\0\a\3\b\5\f\c\4\0\6\8\d\6\6\2\f\1 ]] 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:41.407 13:02:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:41.407 [2024-12-05 13:02:40.819597] Starting SPDK v25.01-pre git sha1 
8d3947977 / DPDK 23.11.0 initialization... 00:31:41.407 [2024-12-05 13:02:40.819724] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92165 ] 00:31:41.407 [2024-12-05 13:02:40.977434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.407 [2024-12-05 13:02:41.006876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.899  [2024-12-05T13:02:43.009Z] Copying: 647/1024 [MB] (647 MBps) [2024-12-05T13:02:43.572Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:31:43.720 00:31:43.720 13:02:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:43.720 13:02:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a370dcd3d0e303635b1a1d547d836098 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a370dcd3d0e303635b1a1d547d836098 != \a\3\7\0\d\c\d\3\d\0\e\3\0\3\6\3\5\b\1\a\1\d\5\4\7\d\8\3\6\0\9\8 ]] 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92083 ]] 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92083 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 92083 ']' 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 92083 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92083 00:31:46.245 killing process with pid 92083 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92083' 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 92083 00:31:46.245 13:02:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 92083 00:31:46.245 [2024-12-05 13:02:45.803145] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:46.245 [2024-12-05 13:02:45.807232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.807268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:46.245 [2024-12-05 13:02:45.807280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:46.245 [2024-12-05 13:02:45.807287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.807307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:46.245 [2024-12-05 13:02:45.807849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.807870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:46.245 [2024-12-05 13:02:45.807878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:31:46.245 [2024-12-05 13:02:45.807889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.808093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.808103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:46.245 [2024-12-05 13:02:45.808111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:31:46.245 [2024-12-05 13:02:45.808118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.809171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.809191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:46.245 [2024-12-05 13:02:45.809200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:31:46.245 [2024-12-05 13:02:45.809207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.810192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.810209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:46.245 [2024-12-05 13:02:45.810216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.958 ms 00:31:46.245 [2024-12-05 13:02:45.810224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.811629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.811658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:46.245 [2024-12-05 13:02:45.811666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.375 ms 00:31:46.245 [2024-12-05 13:02:45.811677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.812802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.812833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:46.245 [2024-12-05 13:02:45.812841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:31:46.245 [2024-12-05 13:02:45.812847] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.812928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.812937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:46.245 [2024-12-05 13:02:45.812943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:46.245 [2024-12-05 13:02:45.812950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.814032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.814054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:46.245 [2024-12-05 13:02:45.814061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.064 ms 00:31:46.245 [2024-12-05 13:02:45.814067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.815048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.815072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:46.245 [2024-12-05 13:02:45.815079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.956 ms 00:31:46.245 [2024-12-05 13:02:45.815084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.815921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.815942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:46.245 [2024-12-05 13:02:45.815950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.810 ms 00:31:46.245 [2024-12-05 13:02:45.815957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.816799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.245 [2024-12-05 13:02:45.816832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:46.245 [2024-12-05 13:02:45.816839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.789 ms 00:31:46.245 [2024-12-05 13:02:45.816845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.245 [2024-12-05 13:02:45.816870] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:46.245 [2024-12-05 13:02:45.816888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:46.245 [2024-12-05 13:02:45.816897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:46.245 [2024-12-05 13:02:45.816904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:46.245 [2024-12-05 13:02:45.816910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:46.245 [2024-12-05 13:02:45.816917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:46.245 [2024-12-05 13:02:45.816923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:46.245 [2024-12-05 13:02:45.816929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:46.245 [2024-12-05 13:02:45.816935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:46.245 
[2024-12-05 13:02:45.816942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:46.245 [2024-12-05 13:02:45.816948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.816997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:46.246 [2024-12-05 13:02:45.817005] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:46.246 [2024-12-05 13:02:45.817011] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 57102aeb-cbe4-425d-bbdd-2f99276d7012 00:31:46.246 [2024-12-05 13:02:45.817018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:46.246 [2024-12-05 13:02:45.817024] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:46.246 [2024-12-05 13:02:45.817029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:46.246 [2024-12-05 13:02:45.817036] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:46.246 [2024-12-05 13:02:45.817041] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:46.246 [2024-12-05 13:02:45.817048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:46.246 [2024-12-05 13:02:45.817054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:46.246 [2024-12-05 13:02:45.817059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:46.246 [2024-12-05 13:02:45.817066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:46.246 [2024-12-05 13:02:45.817072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.246 [2024-12-05 13:02:45.817078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:46.246 [2024-12-05 13:02:45.817086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:31:46.246 [2024-12-05 13:02:45.817093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.818782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.246 [2024-12-05 13:02:45.818804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:46.246 [2024-12-05 13:02:45.818841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.675 ms 00:31:46.246 [2024-12-05 13:02:45.818847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:31:46.246 [2024-12-05 13:02:45.818939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.246 [2024-12-05 13:02:45.818951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:46.246 [2024-12-05 13:02:45.818958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:31:46.246 [2024-12-05 13:02:45.818965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.824987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.825012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:46.246 [2024-12-05 13:02:45.825021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.825028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.825057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.825065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:46.246 [2024-12-05 13:02:45.825071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.825078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.825142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.825151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:46.246 [2024-12-05 13:02:45.825157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.825170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.825185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.825194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:46.246 [2024-12-05 13:02:45.825201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.825207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.836371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.836410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:46.246 [2024-12-05 13:02:45.836419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.836426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.844914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.844958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:46.246 [2024-12-05 13:02:45.844967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.844974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.845044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.845053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:46.246 [2024-12-05 13:02:45.845060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.845066] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.845095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.845107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:46.246 [2024-12-05 13:02:45.845116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.845122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.845183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.845193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:46.246 [2024-12-05 13:02:45.845199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.845206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.845231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.845240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:46.246 [2024-12-05 13:02:45.845246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.845256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.845291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.845299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:46.246 [2024-12-05 13:02:45.845305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.845311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.845350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:46.246 [2024-12-05 13:02:45.845359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:46.246 [2024-12-05 13:02:45.845365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:46.246 [2024-12-05 13:02:45.845373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.246 [2024-12-05 13:02:45.845484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 38.225 ms, result 0 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:46.246 Remove shared memory files 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:46.246 13:02:46 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid91876 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:46.246 00:31:46.246 real 1m16.678s 00:31:46.246 user 1m41.860s 00:31:46.246 sys 0m19.807s 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.246 13:02:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:46.246 ************************************ 00:31:46.246 END TEST ftl_upgrade_shutdown 00:31:46.246 ************************************ 00:31:46.246 13:02:46 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:46.246 13:02:46 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:46.246 13:02:46 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:46.246 13:02:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.246 13:02:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:46.246 ************************************ 00:31:46.246 START TEST ftl_restore_fast 00:31:46.246 ************************************ 00:31:46.246 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:46.504 * Looking for test storage... 00:31:46.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.504 --rc genhtml_branch_coverage=1 00:31:46.504 --rc genhtml_function_coverage=1 00:31:46.504 --rc genhtml_legend=1 00:31:46.504 --rc geninfo_all_blocks=1 00:31:46.504 --rc geninfo_unexecuted_blocks=1 00:31:46.504 00:31:46.504 ' 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.504 --rc genhtml_branch_coverage=1 00:31:46.504 --rc genhtml_function_coverage=1 00:31:46.504 --rc genhtml_legend=1 00:31:46.504 --rc geninfo_all_blocks=1 00:31:46.504 --rc geninfo_unexecuted_blocks=1 00:31:46.504 00:31:46.504 ' 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.504 --rc genhtml_branch_coverage=1 00:31:46.504 --rc genhtml_function_coverage=1 00:31:46.504 --rc genhtml_legend=1 00:31:46.504 --rc geninfo_all_blocks=1 00:31:46.504 --rc geninfo_unexecuted_blocks=1 00:31:46.504 00:31:46.504 ' 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:46.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.504 --rc genhtml_branch_coverage=1 00:31:46.504 --rc genhtml_function_coverage=1 00:31:46.504 --rc genhtml_legend=1 00:31:46.504 --rc geninfo_all_blocks=1 00:31:46.504 --rc geninfo_unexecuted_blocks=1 00:31:46.504 00:31:46.504 ' 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:46.504 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.RbQM0wRPBQ 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:46.505 13:02:46 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92294 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92294 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 92294 ']' 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.505 13:02:46 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:46.505 [2024-12-05 13:02:46.341451] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 00:31:46.505 [2024-12-05 13:02:46.341955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92294 ] 00:31:46.762 [2024-12-05 13:02:46.501500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.762 [2024-12-05 13:02:46.526612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:47.328 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:47.894 { 00:31:47.894 "name": "nvme0n1", 00:31:47.894 "aliases": [ 00:31:47.894 "19fadfc0-5e57-40af-9f6f-fdf4a15669cc" 00:31:47.894 ], 00:31:47.894 "product_name": "NVMe disk", 00:31:47.894 "block_size": 4096, 00:31:47.894 "num_blocks": 1310720, 00:31:47.894 "uuid": "19fadfc0-5e57-40af-9f6f-fdf4a15669cc", 00:31:47.894 "numa_id": -1, 00:31:47.894 "assigned_rate_limits": { 00:31:47.894 "rw_ios_per_sec": 0, 00:31:47.894 "rw_mbytes_per_sec": 0, 00:31:47.894 "r_mbytes_per_sec": 0, 00:31:47.894 "w_mbytes_per_sec": 0 00:31:47.894 }, 00:31:47.894 "claimed": true, 00:31:47.894 "claim_type": "read_many_write_one", 00:31:47.894 "zoned": false, 00:31:47.894 "supported_io_types": { 00:31:47.894 "read": true, 00:31:47.894 "write": true, 00:31:47.894 "unmap": true, 00:31:47.894 "flush": true, 00:31:47.894 "reset": true, 00:31:47.894 "nvme_admin": true, 00:31:47.894 "nvme_io": true, 00:31:47.894 "nvme_io_md": false, 00:31:47.894 "write_zeroes": true, 00:31:47.894 "zcopy": false, 00:31:47.894 "get_zone_info": false, 00:31:47.894 "zone_management": false, 00:31:47.894 "zone_append": false, 00:31:47.894 "compare": true, 00:31:47.894 "compare_and_write": false, 00:31:47.894 "abort": true, 00:31:47.894 "seek_hole": false, 00:31:47.894 "seek_data": false, 00:31:47.894 "copy": true, 00:31:47.894 "nvme_iov_md": false 00:31:47.894 }, 00:31:47.894 "driver_specific": { 00:31:47.894 "nvme": [ 00:31:47.894 { 00:31:47.894 "pci_address": "0000:00:11.0", 00:31:47.894 "trid": { 00:31:47.894 "trtype": "PCIe", 00:31:47.894 "traddr": "0000:00:11.0" 00:31:47.894 }, 00:31:47.894 "ctrlr_data": { 00:31:47.894 "cntlid": 0, 00:31:47.894 "vendor_id": "0x1b36", 00:31:47.894 "model_number": "QEMU NVMe Ctrl", 00:31:47.894 "serial_number": "12341", 00:31:47.894 "firmware_revision": "8.0.0", 00:31:47.894 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:47.894 "oacs": { 00:31:47.894 "security": 0, 00:31:47.894 "format": 1, 00:31:47.894 "firmware": 0, 00:31:47.894 "ns_manage": 1 00:31:47.894 }, 00:31:47.894 "multi_ctrlr": false, 00:31:47.894 "ana_reporting": false 00:31:47.894 }, 00:31:47.894 "vs": { 00:31:47.894 "nvme_version": "1.4" 00:31:47.894 }, 00:31:47.894 "ns_data": { 00:31:47.894 "id": 1, 00:31:47.894 "can_share": false 00:31:47.894 } 00:31:47.894 } 00:31:47.894 ], 00:31:47.894 "mp_policy": "active_passive" 00:31:47.894 } 00:31:47.894 } 00:31:47.894 ]' 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:47.894 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=82ea626c-4a05-4a34-8e18-91446a073e65 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:48.151 13:02:47 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82ea626c-4a05-4a34-8e18-91446a073e65 00:31:48.409 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:48.667 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=f3c26d7e-6183-4dfd-8dfc-61c420183d41 00:31:48.667 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f3c26d7e-6183-4dfd-8dfc-61c420183d41 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:48.925 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:49.183 { 00:31:49.183 "name": "e180d24b-1439-401f-a60c-a23b1622bbd8", 00:31:49.183 "aliases": [ 00:31:49.183 "lvs/nvme0n1p0" 00:31:49.183 ], 00:31:49.183 "product_name": "Logical Volume", 00:31:49.183 "block_size": 4096, 00:31:49.183 "num_blocks": 26476544, 00:31:49.183 "uuid": "e180d24b-1439-401f-a60c-a23b1622bbd8", 00:31:49.183 "assigned_rate_limits": { 00:31:49.183 "rw_ios_per_sec": 0, 00:31:49.183 "rw_mbytes_per_sec": 0, 00:31:49.183 "r_mbytes_per_sec": 0, 00:31:49.183 "w_mbytes_per_sec": 0 00:31:49.183 }, 00:31:49.183 "claimed": false, 00:31:49.183 "zoned": false, 00:31:49.183 "supported_io_types": { 00:31:49.183 "read": true, 00:31:49.183 "write": true, 00:31:49.183 "unmap": true, 00:31:49.183 "flush": false, 00:31:49.183 "reset": true, 00:31:49.183 "nvme_admin": false, 00:31:49.183 "nvme_io": false, 00:31:49.183 "nvme_io_md": false, 00:31:49.183 "write_zeroes": true, 00:31:49.183 "zcopy": false, 00:31:49.183 "get_zone_info": false, 00:31:49.183 "zone_management": false, 00:31:49.183 
"zone_append": false, 00:31:49.183 "compare": false, 00:31:49.183 "compare_and_write": false, 00:31:49.183 "abort": false, 00:31:49.183 "seek_hole": true, 00:31:49.183 "seek_data": true, 00:31:49.183 "copy": false, 00:31:49.183 "nvme_iov_md": false 00:31:49.183 }, 00:31:49.183 "driver_specific": { 00:31:49.183 "lvol": { 00:31:49.183 "lvol_store_uuid": "f3c26d7e-6183-4dfd-8dfc-61c420183d41", 00:31:49.183 "base_bdev": "nvme0n1", 00:31:49.183 "thin_provision": true, 00:31:49.183 "num_allocated_clusters": 0, 00:31:49.183 "snapshot": false, 00:31:49.183 "clone": false, 00:31:49.183 "esnap_clone": false 00:31:49.183 } 00:31:49.183 } 00:31:49.183 } 00:31:49.183 ]' 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:49.183 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:49.184 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:49.184 13:02:48 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:49.441 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:49.698 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:49.698 { 00:31:49.698 "name": "e180d24b-1439-401f-a60c-a23b1622bbd8", 00:31:49.698 "aliases": [ 00:31:49.698 "lvs/nvme0n1p0" 00:31:49.698 ], 00:31:49.698 "product_name": "Logical Volume", 00:31:49.698 "block_size": 4096, 00:31:49.698 "num_blocks": 26476544, 00:31:49.698 "uuid": "e180d24b-1439-401f-a60c-a23b1622bbd8", 00:31:49.698 "assigned_rate_limits": { 00:31:49.698 "rw_ios_per_sec": 0, 00:31:49.698 "rw_mbytes_per_sec": 0, 00:31:49.698 "r_mbytes_per_sec": 0, 00:31:49.698 "w_mbytes_per_sec": 0 00:31:49.698 }, 00:31:49.698 "claimed": false, 00:31:49.698 "zoned": false, 00:31:49.698 "supported_io_types": { 00:31:49.698 "read": true, 00:31:49.698 "write": true, 00:31:49.698 "unmap": true, 00:31:49.698 "flush": false, 00:31:49.698 "reset": true, 00:31:49.698 "nvme_admin": false, 00:31:49.698 "nvme_io": false, 00:31:49.698 "nvme_io_md": false, 00:31:49.698 "write_zeroes": true, 00:31:49.698 "zcopy": false, 00:31:49.698 "get_zone_info": false, 00:31:49.698 
"zone_management": false, 00:31:49.698 "zone_append": false, 00:31:49.698 "compare": false, 00:31:49.698 "compare_and_write": false, 00:31:49.698 "abort": false, 00:31:49.698 "seek_hole": true, 00:31:49.698 "seek_data": true, 00:31:49.698 "copy": false, 00:31:49.698 "nvme_iov_md": false 00:31:49.698 }, 00:31:49.698 "driver_specific": { 00:31:49.698 "lvol": { 00:31:49.698 "lvol_store_uuid": "f3c26d7e-6183-4dfd-8dfc-61c420183d41", 00:31:49.699 "base_bdev": "nvme0n1", 00:31:49.699 "thin_provision": true, 00:31:49.699 "num_allocated_clusters": 0, 00:31:49.699 "snapshot": false, 00:31:49.699 "clone": false, 00:31:49.699 "esnap_clone": false 00:31:49.699 } 00:31:49.699 } 00:31:49.699 } 00:31:49.699 ]' 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:49.699 13:02:49 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:49.956 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:49.956 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:49.956 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:49.956 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:49.956 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:49.956 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:49.956 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e180d24b-1439-401f-a60c-a23b1622bbd8 00:31:50.213 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:50.213 { 00:31:50.213 "name": "e180d24b-1439-401f-a60c-a23b1622bbd8", 00:31:50.213 "aliases": [ 00:31:50.213 "lvs/nvme0n1p0" 00:31:50.213 ], 00:31:50.213 "product_name": "Logical Volume", 00:31:50.213 "block_size": 4096, 00:31:50.213 "num_blocks": 26476544, 00:31:50.213 "uuid": "e180d24b-1439-401f-a60c-a23b1622bbd8", 00:31:50.213 "assigned_rate_limits": { 00:31:50.213 "rw_ios_per_sec": 0, 00:31:50.213 "rw_mbytes_per_sec": 0, 00:31:50.213 "r_mbytes_per_sec": 0, 00:31:50.213 "w_mbytes_per_sec": 0 00:31:50.213 }, 00:31:50.213 "claimed": false, 00:31:50.213 "zoned": false, 00:31:50.213 "supported_io_types": { 00:31:50.213 "read": true, 00:31:50.213 "write": true, 00:31:50.213 "unmap": true, 00:31:50.213 "flush": false, 00:31:50.213 "reset": true, 00:31:50.213 "nvme_admin": false, 00:31:50.213 "nvme_io": false, 00:31:50.213 "nvme_io_md": false, 00:31:50.213 "write_zeroes": true, 00:31:50.213 "zcopy": false, 00:31:50.213 "get_zone_info": false, 00:31:50.213 "zone_management": false, 00:31:50.213 "zone_append": false, 00:31:50.213 "compare": false, 00:31:50.214 "compare_and_write": false, 00:31:50.214 "abort": false, 
00:31:50.214 "seek_hole": true, 00:31:50.214 "seek_data": true, 00:31:50.214 "copy": false, 00:31:50.214 "nvme_iov_md": false 00:31:50.214 }, 00:31:50.214 "driver_specific": { 00:31:50.214 "lvol": { 00:31:50.214 "lvol_store_uuid": "f3c26d7e-6183-4dfd-8dfc-61c420183d41", 00:31:50.214 "base_bdev": "nvme0n1", 00:31:50.214 "thin_provision": true, 00:31:50.214 "num_allocated_clusters": 0, 00:31:50.214 "snapshot": false, 00:31:50.214 "clone": false, 00:31:50.214 "esnap_clone": false 00:31:50.214 } 00:31:50.214 } 00:31:50.214 } 00:31:50.214 ]' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e180d24b-1439-401f-a60c-a23b1622bbd8 --l2p_dram_limit 10' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:50.214 13:02:49 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e180d24b-1439-401f-a60c-a23b1622bbd8 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:50.471 [2024-12-05 13:02:50.081940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.471 [2024-12-05 13:02:50.081997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:50.471 [2024-12-05 13:02:50.082010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:50.472 [2024-12-05 13:02:50.082019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.082079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.082091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:50.472 [2024-12-05 13:02:50.082098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:50.472 [2024-12-05 13:02:50.082109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.082129] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:50.472 [2024-12-05 13:02:50.082383] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:50.472 [2024-12-05 13:02:50.082394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.082403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:50.472 [2024-12-05 13:02:50.082410] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:31:50.472 [2024-12-05 13:02:50.082418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.082474] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9d3bffc2-01da-4dfa-8846-a146b1722d32 00:31:50.472 [2024-12-05 13:02:50.083795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.083835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:50.472 [2024-12-05 13:02:50.083845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:50.472 [2024-12-05 13:02:50.083851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.090767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.090798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:50.472 [2024-12-05 13:02:50.090820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.876 ms 00:31:50.472 [2024-12-05 13:02:50.090831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.090906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.090913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:50.472 [2024-12-05 13:02:50.090921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:50.472 [2024-12-05 13:02:50.090928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.090974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.090982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:50.472 [2024-12-05 13:02:50.090990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:50.472 [2024-12-05 13:02:50.090996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.091018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:50.472 [2024-12-05 13:02:50.092673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.092704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:50.472 [2024-12-05 13:02:50.092715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:31:50.472 [2024-12-05 13:02:50.092723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.092756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.092765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:50.472 [2024-12-05 13:02:50.092771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:50.472 [2024-12-05 13:02:50.092784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.092803] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:50.472 [2024-12-05 13:02:50.092959] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:50.472 [2024-12-05 13:02:50.092969] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:50.472 [2024-12-05 13:02:50.092981] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:50.472 [2024-12-05 13:02:50.092992] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093003] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093009] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:50.472 [2024-12-05 13:02:50.093019] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:50.472 [2024-12-05 13:02:50.093024] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:50.472 [2024-12-05 13:02:50.093031] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:50.472 [2024-12-05 13:02:50.093037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.093048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:50.472 [2024-12-05 13:02:50.093054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:31:50.472 [2024-12-05 13:02:50.093062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.093130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.472 [2024-12-05 13:02:50.093140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:50.472 [2024-12-05 13:02:50.093146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:50.472 [2024-12-05 13:02:50.093155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.472 [2024-12-05 13:02:50.093231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:50.472 [2024-12-05 13:02:50.093240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:50.472 [2024-12-05 13:02:50.093246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:50.472 [2024-12-05 13:02:50.093267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:50.472 [2024-12-05 13:02:50.093284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:50.472 [2024-12-05 13:02:50.093296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:50.472 [2024-12-05 13:02:50.093302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:50.472 [2024-12-05 13:02:50.093307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:50.472 [2024-12-05 13:02:50.093316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:50.472 [2024-12-05 13:02:50.093321] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:50.472 [2024-12-05 13:02:50.093329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:50.472 [2024-12-05 13:02:50.093343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:50.472 [2024-12-05 13:02:50.093362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:50.472 [2024-12-05 13:02:50.093382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:50.472 [2024-12-05 13:02:50.093401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:50.472 [2024-12-05 13:02:50.093421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:50.472 [2024-12-05 13:02:50.093433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:50.472 [2024-12-05 13:02:50.093439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:50.472 [2024-12-05 13:02:50.093450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:50.472 [2024-12-05 13:02:50.093457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:50.472 [2024-12-05 13:02:50.093462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:50.472 [2024-12-05 13:02:50.093469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:50.472 [2024-12-05 13:02:50.093474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:50.472 [2024-12-05 13:02:50.093480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:50.472 [2024-12-05 13:02:50.093493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:50.472 [2024-12-05 13:02:50.093498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093504] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:50.472 [2024-12-05 13:02:50.093510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:50.472 [2024-12-05 13:02:50.093519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:50.472 [2024-12-05 13:02:50.093525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:50.472 [2024-12-05 13:02:50.093535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:50.472 [2024-12-05 13:02:50.093541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:50.473 [2024-12-05 13:02:50.093549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:50.473 [2024-12-05 13:02:50.093556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:50.473 [2024-12-05 13:02:50.093563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:50.473 [2024-12-05 13:02:50.093568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:50.473 [2024-12-05 13:02:50.093577] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:50.473 [2024-12-05 13:02:50.093586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:50.473 [2024-12-05 13:02:50.093595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:50.473 [2024-12-05 13:02:50.093601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:50.473 [2024-12-05 13:02:50.093608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:50.473 [2024-12-05 13:02:50.093614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:50.473 [2024-12-05 13:02:50.093621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:50.473 [2024-12-05 13:02:50.093627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:50.473 [2024-12-05 13:02:50.093636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:50.473 [2024-12-05 13:02:50.093641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:50.473 [2024-12-05 13:02:50.093649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:50.473 [2024-12-05 13:02:50.093655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:50.473 [2024-12-05 13:02:50.093661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:50.473 [2024-12-05 13:02:50.093667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:50.473 [2024-12-05 13:02:50.093674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:50.473 [2024-12-05 13:02:50.093680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:31:50.473 [2024-12-05 13:02:50.093687] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:50.473 [2024-12-05 13:02:50.093694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:50.473 [2024-12-05 13:02:50.093702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:50.473 [2024-12-05 13:02:50.093708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:50.473 [2024-12-05 13:02:50.093715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:50.473 [2024-12-05 13:02:50.093721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:50.473 [2024-12-05 13:02:50.093729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:50.473 [2024-12-05 13:02:50.093737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:50.473 [2024-12-05 13:02:50.093747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:31:50.473 [2024-12-05 13:02:50.093752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.473 [2024-12-05 13:02:50.093783] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:50.473 [2024-12-05 13:02:50.093790] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:52.999 [2024-12-05 13:02:52.414059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.414126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:52.999 [2024-12-05 13:02:52.414158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2320.258 ms 00:31:52.999 [2024-12-05 13:02:52.414168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.999 [2024-12-05 13:02:52.425236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.425287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:52.999 [2024-12-05 13:02:52.425304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.921 ms 00:31:52.999 [2024-12-05 13:02:52.425313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.999 [2024-12-05 13:02:52.425435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.425445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:52.999 [2024-12-05 13:02:52.425456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:52.999 [2024-12-05 13:02:52.425464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.999 [2024-12-05 13:02:52.436037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.436206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:52.999 [2024-12-05 13:02:52.436228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.502 ms 00:31:52.999 [2024-12-05 13:02:52.436240] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.999 [2024-12-05 13:02:52.436279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.436288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:52.999 [2024-12-05 13:02:52.436303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:52.999 [2024-12-05 13:02:52.436311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.999 [2024-12-05 13:02:52.436742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.436759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:52.999 [2024-12-05 13:02:52.436771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:31:52.999 [2024-12-05 13:02:52.436780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.999 [2024-12-05 13:02:52.436934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.436946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:52.999 [2024-12-05 13:02:52.436958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:31:52.999 [2024-12-05 13:02:52.436967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.999 [2024-12-05 13:02:52.443674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.999 [2024-12-05 13:02:52.443791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:52.999 [2024-12-05 13:02:52.443819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.680 ms 00:31:52.999 [2024-12-05 13:02:52.443828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.460518] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:53.000 [2024-12-05 13:02:52.463942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.463977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:53.000 [2024-12-05 13:02:52.463990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.050 ms 00:31:53.000 [2024-12-05 13:02:52.464000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.508918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.508991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:53.000 [2024-12-05 13:02:52.509016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.876 ms 00:31:53.000 [2024-12-05 13:02:52.509036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.509228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.509243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:53.000 [2024-12-05 13:02:52.509253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:31:53.000 [2024-12-05 13:02:52.509262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.512424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.512469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:31:53.000 [2024-12-05 13:02:52.512483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:31:53.000 [2024-12-05 13:02:52.512494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.516734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.516849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:53.000 [2024-12-05 13:02:52.516876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.214 ms 00:31:53.000 [2024-12-05 13:02:52.516897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.517558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.517609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:53.000 [2024-12-05 13:02:52.517632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:31:53.000 [2024-12-05 13:02:52.517659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.544344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.544396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:53.000 [2024-12-05 13:02:52.544412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.642 ms 00:31:53.000 [2024-12-05 13:02:52.544423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.548938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.548986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:53.000 [2024-12-05 13:02:52.549000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.480 ms 00:31:53.000 [2024-12-05 13:02:52.549012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.552101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.552149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:53.000 [2024-12-05 13:02:52.552160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:31:53.000 [2024-12-05 13:02:52.552170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.555351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.555522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:53.000 [2024-12-05 13:02:52.555546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:31:53.000 [2024-12-05 13:02:52.555566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.555604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.555637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:53.000 [2024-12-05 13:02:52.555652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:53.000 [2024-12-05 13:02:52.555667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.555783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.000 [2024-12-05 13:02:52.555832] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:53.000 [2024-12-05 13:02:52.555854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:53.000 [2024-12-05 13:02:52.555876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.000 [2024-12-05 13:02:52.556971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2474.580 ms, result 0 00:31:53.000 { 00:31:53.000 "name": "ftl0", 00:31:53.000 "uuid": "9d3bffc2-01da-4dfa-8846-a146b1722d32" 00:31:53.000 } 00:31:53.000 13:02:52 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:53.000 13:02:52 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:53.000 13:02:52 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:53.000 13:02:52 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:53.268 [2024-12-05 13:02:52.980350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.268 [2024-12-05 13:02:52.980613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:53.268 [2024-12-05 13:02:52.981004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:53.268 [2024-12-05 13:02:52.981150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.268 [2024-12-05 13:02:52.981228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:53.268 [2024-12-05 13:02:52.982257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.268 [2024-12-05 13:02:52.982365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:53.268 [2024-12-05 13:02:52.982420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:31:53.268 [2024-12-05 13:02:52.982448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.268 [2024-12-05 13:02:52.983222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.268 [2024-12-05 13:02:52.983429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:53.268 [2024-12-05 13:02:52.983608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:31:53.268 [2024-12-05 13:02:52.983788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.268 [2024-12-05 13:02:52.993656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.268 [2024-12-05 13:02:52.993951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:53.268 [2024-12-05 13:02:52.993992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.659 ms 00:31:53.268 [2024-12-05 13:02:52.994017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.268 [2024-12-05 13:02:53.002975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.268 [2024-12-05 13:02:53.003098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:53.268 [2024-12-05 13:02:53.003150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.903 ms 00:31:53.268 [2024-12-05 13:02:53.003206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.268 [2024-12-05 13:02:53.004936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:53.268 [2024-12-05 13:02:53.005057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:53.268 [2024-12-05 13:02:53.005110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:31:53.268 [2024-12-05 13:02:53.005158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.268 [2024-12-05 13:02:53.008790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.268 [2024-12-05 13:02:53.008924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:53.268 [2024-12-05 13:02:53.008977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.539 ms 00:31:53.268 [2024-12-05 13:02:53.009002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.269 [2024-12-05 13:02:53.009140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.269 [2024-12-05 13:02:53.009167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:53.269 [2024-12-05 13:02:53.009194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:31:53.269 [2024-12-05 13:02:53.009243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.269 [2024-12-05 13:02:53.010987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.269 [2024-12-05 13:02:53.011088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:53.269 [2024-12-05 13:02:53.011138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:31:53.269 [2024-12-05 13:02:53.011190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.269 [2024-12-05 13:02:53.012267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.269 [2024-12-05 13:02:53.012369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:53.269 [2024-12-05 13:02:53.012420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:31:53.269 [2024-12-05 13:02:53.012467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.269 [2024-12-05 13:02:53.013482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.269 [2024-12-05 13:02:53.013580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:53.269 [2024-12-05 13:02:53.013631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:31:53.269 [2024-12-05 13:02:53.013678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.269 [2024-12-05 13:02:53.014652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.269 [2024-12-05 13:02:53.014748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:53.269 [2024-12-05 13:02:53.014797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:31:53.269 [2024-12-05 13:02:53.014857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.269 [2024-12-05 13:02:53.014901] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:53.269 [2024-12-05 13:02:53.014963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.014998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015054] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.015982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016561] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.016974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:53.269 [2024-12-05 13:02:53.017334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 
13:02:53.017705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.017983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:31:53.270 [2024-12-05 13:02:53.018718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:53.270 [2024-12-05 13:02:53.018938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:53.270 [2024-12-05 13:02:53.018948] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d3bffc2-01da-4dfa-8846-a146b1722d32 00:31:53.270 
[2024-12-05 13:02:53.018959] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:53.270 [2024-12-05 13:02:53.018966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:53.270 [2024-12-05 13:02:53.018975] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:53.270 [2024-12-05 13:02:53.018983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:53.270 [2024-12-05 13:02:53.018993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:53.271 [2024-12-05 13:02:53.019004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:53.271 [2024-12-05 13:02:53.019015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:53.271 [2024-12-05 13:02:53.019022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:53.271 [2024-12-05 13:02:53.019030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:53.271 [2024-12-05 13:02:53.019037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.271 [2024-12-05 13:02:53.019047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:53.271 [2024-12-05 13:02:53.019056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.137 ms 00:31:53.271 [2024-12-05 13:02:53.019065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.020927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.271 [2024-12-05 13:02:53.020964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:53.271 [2024-12-05 13:02:53.020976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.833 ms 00:31:53.271 [2024-12-05 13:02:53.020987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.021110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.271 [2024-12-05 13:02:53.021121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:53.271 [2024-12-05 13:02:53.021131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:31:53.271 [2024-12-05 13:02:53.021140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.027559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.027615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:53.271 [2024-12-05 13:02:53.027630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.027640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.027713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.027724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:53.271 [2024-12-05 13:02:53.027732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.027742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.027914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.027931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:53.271 [2024-12-05 13:02:53.027939] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.027951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.027969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.027980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:53.271 [2024-12-05 13:02:53.027987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.027997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.040404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.040481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:53.271 [2024-12-05 13:02:53.040497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.040507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.050230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:53.271 [2024-12-05 13:02:53.050243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.050253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.050366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:53.271 [2024-12-05 13:02:53.050375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.050385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.050435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:53.271 [2024-12-05 13:02:53.050443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.050452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.050536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:53.271 [2024-12-05 13:02:53.050548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.050557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.050611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:53.271 [2024-12-05 13:02:53.050619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.050628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.050701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:31:53.271 [2024-12-05 13:02:53.050709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.050718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:53.271 [2024-12-05 13:02:53.050781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:53.271 [2024-12-05 13:02:53.050790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:53.271 [2024-12-05 13:02:53.050801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.271 [2024-12-05 13:02:53.050966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.588 ms, result 0 00:31:53.271 true 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92294 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92294 ']' 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92294 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92294 00:31:53.271 killing process with pid 92294 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92294' 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 92294 00:31:53.271 13:02:53 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 92294 00:32:03.241 13:03:02 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:06.527 262144+0 records in 00:32:06.527 262144+0 records out 00:32:06.527 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.09673 s, 262 MB/s 00:32:06.527 13:03:06 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:09.066 13:03:08 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:09.067 [2024-12-05 13:03:08.506157] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
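
One line in the shutdown statistics dump above deserves a gloss: ftl_debug.c (lines 211-220 in the trace) prints 'total writes: 960', 'user writes: 0', and 'WAF: inf'. Reading the two counters as media writes over host writes, that is simply the write-amplification ratio degenerating on a device that has so far seen only internal metadata writes:

    WAF = total writes / user writes = 960 / 0 -> inf

Once user data lands (as in the spdk_dd copy that follows), the denominator becomes nonzero and the figure becomes meaningful.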
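
The shell trace above (ftl/restore.sh@61-73) shows the restore setup only in fragments; pulled together, a minimal standalone sketch of the same sequence looks like the following. It assumes an SPDK target is already running with the ftl0 bdev created, exactly as in this log; the paths are the ones the log prints, the ftl.json destination is inferred from restore.sh@73, and everything else (pid tracking, killprocess handling) is omitted.

#!/usr/bin/env bash
# Sketch of the ftl_restore_fast setup steps traced above (restore.sh@61-73).
# Assumes an SPDK target is already running with an FTL bdev named "ftl0".
set -euo pipefail

SPDK_REPO=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_REPO/scripts/rpc.py"
TESTFILE="$SPDK_REPO/test/ftl/testfile"
FTL_JSON="$SPDK_REPO/test/ftl/config/ftl.json"  # assumed target of the saved config; restore.sh@73 reads it

# restore.sh@61-63: wrap the live bdev subsystem config in a top-level
# {"subsystems": [...]} object so a fresh spdk_dd process can replay it.
{
  echo '{"subsystems": ['
  "$RPC" save_subsystem_config -n bdev
  echo ']}'
} > "$FTL_JSON"

# restore.sh@65: clean shutdown of the FTL instance -- the 'FTL shutdown'
# management process above, which leaves the clean state fast restore needs.
"$RPC" bdev_ftl_unload -b ftl0

# restore.sh@69: 256K records x 4 KiB = 262144 x 4096 B = 1073741824 B (1 GiB),
# matching the dd summary above (copied there at 262 MB/s).
dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

# restore.sh@70: checksum kept so the data can be verified after restore.
md5sum "$TESTFILE"

# restore.sh@73: replay the saved config in a new process and write the test
# file into ftl0; this re-runs 'FTL startup', now taking the restore path.
"$SPDK_REPO/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"
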
00:32:09.067 [2024-12-05 13:03:08.506678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92497 ] 00:32:09.067 [2024-12-05 13:03:08.667858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.067 [2024-12-05 13:03:08.692662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.067 [2024-12-05 13:03:08.796900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:09.067 [2024-12-05 13:03:08.796978] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:09.328 [2024-12-05 13:03:08.955645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.955713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:09.328 [2024-12-05 13:03:08.955727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:09.328 [2024-12-05 13:03:08.955736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.955787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.955798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:09.328 [2024-12-05 13:03:08.955826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:09.328 [2024-12-05 13:03:08.955834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.955857] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:09.328 [2024-12-05 13:03:08.956102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:09.328 [2024-12-05 13:03:08.956117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.956129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:09.328 [2024-12-05 13:03:08.956144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:32:09.328 [2024-12-05 13:03:08.956152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.957468] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:09.328 [2024-12-05 13:03:08.960534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.960567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:09.328 [2024-12-05 13:03:08.960587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.067 ms 00:32:09.328 [2024-12-05 13:03:08.960601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.960655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.960664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:09.328 [2024-12-05 13:03:08.960675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:32:09.328 [2024-12-05 13:03:08.960683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.967089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:09.328 [2024-12-05 13:03:08.967117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:09.328 [2024-12-05 13:03:08.967133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.361 ms 00:32:09.328 [2024-12-05 13:03:08.967141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.967228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.967237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:09.328 [2024-12-05 13:03:08.967246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:09.328 [2024-12-05 13:03:08.967258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.967300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.967309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:09.328 [2024-12-05 13:03:08.967317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:09.328 [2024-12-05 13:03:08.967329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.967355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:09.328 [2024-12-05 13:03:08.969087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.969111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:09.328 [2024-12-05 13:03:08.969120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:32:09.328 [2024-12-05 13:03:08.969128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.969165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.969174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:09.328 [2024-12-05 13:03:08.969187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:09.328 [2024-12-05 13:03:08.969196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.328 [2024-12-05 13:03:08.969224] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:09.328 [2024-12-05 13:03:08.969250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:09.328 [2024-12-05 13:03:08.969288] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:09.328 [2024-12-05 13:03:08.969310] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:09.328 [2024-12-05 13:03:08.969438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:09.328 [2024-12-05 13:03:08.969458] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:09.328 [2024-12-05 13:03:08.969472] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:09.328 [2024-12-05 13:03:08.969485] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:09.328 [2024-12-05 13:03:08.969494] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:09.328 [2024-12-05 13:03:08.969506] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:09.328 [2024-12-05 13:03:08.969513] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:09.328 [2024-12-05 13:03:08.969523] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:09.328 [2024-12-05 13:03:08.969531] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:09.328 [2024-12-05 13:03:08.969539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.328 [2024-12-05 13:03:08.969553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:09.328 [2024-12-05 13:03:08.969560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:32:09.329 [2024-12-05 13:03:08.969572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.329 [2024-12-05 13:03:08.969663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.329 [2024-12-05 13:03:08.969675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:09.329 [2024-12-05 13:03:08.969689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:09.329 [2024-12-05 13:03:08.969696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.329 [2024-12-05 13:03:08.969859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:09.329 [2024-12-05 13:03:08.969872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:09.329 [2024-12-05 13:03:08.969881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:09.329 [2024-12-05 13:03:08.969896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.329 [2024-12-05 13:03:08.969905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:09.329 [2024-12-05 13:03:08.969913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:09.329 [2024-12-05 13:03:08.969921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:09.329 [2024-12-05 13:03:08.969930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:09.329 [2024-12-05 13:03:08.969938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:09.329 [2024-12-05 13:03:08.969946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:09.329 [2024-12-05 13:03:08.969953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:09.329 [2024-12-05 13:03:08.969997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:09.329 [2024-12-05 13:03:08.970005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:09.329 [2024-12-05 13:03:08.970018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:09.329 [2024-12-05 13:03:08.970027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:09.329 [2024-12-05 13:03:08.970036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:09.329 [2024-12-05 13:03:08.970057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:09.329 [2024-12-05 13:03:08.970068] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:09.329 [2024-12-05 13:03:08.970088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.329 [2024-12-05 13:03:08.970108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:09.329 [2024-12-05 13:03:08.970116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.329 [2024-12-05 13:03:08.970134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:09.329 [2024-12-05 13:03:08.970147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.329 [2024-12-05 13:03:08.970171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:09.329 [2024-12-05 13:03:08.970183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.329 [2024-12-05 13:03:08.970198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:09.329 [2024-12-05 13:03:08.970206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.329 [2024-12-05 13:03:08.970221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:09.329 [2024-12-05 13:03:08.970229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:09.329 [2024-12-05 13:03:08.970236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.329 [2024-12-05 13:03:08.970244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:09.329 [2024-12-05 13:03:08.970252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:09.329 [2024-12-05 13:03:08.970259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:09.329 [2024-12-05 13:03:08.970272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:09.329 [2024-12-05 13:03:08.970279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970287] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:09.329 [2024-12-05 13:03:08.970297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:09.329 [2024-12-05 13:03:08.970304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:09.329 [2024-12-05 13:03:08.970313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.329 [2024-12-05 13:03:08.970321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:09.329 [2024-12-05 13:03:08.970328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:09.329 [2024-12-05 13:03:08.970335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:09.329 
[2024-12-05 13:03:08.970341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:09.329 [2024-12-05 13:03:08.970347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:09.329 [2024-12-05 13:03:08.970354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:09.329 [2024-12-05 13:03:08.970363] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:09.329 [2024-12-05 13:03:08.970372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.329 [2024-12-05 13:03:08.970380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:09.329 [2024-12-05 13:03:08.970388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:09.329 [2024-12-05 13:03:08.970395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:09.329 [2024-12-05 13:03:08.970402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:09.329 [2024-12-05 13:03:08.970411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:09.329 [2024-12-05 13:03:08.970419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:09.329 [2024-12-05 13:03:08.970426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:09.329 [2024-12-05 13:03:08.970433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:09.329 [2024-12-05 13:03:08.970440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:09.329 [2024-12-05 13:03:08.970447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:09.329 [2024-12-05 13:03:08.970454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:09.329 [2024-12-05 13:03:08.970462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:09.329 [2024-12-05 13:03:08.970469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:09.329 [2024-12-05 13:03:08.970476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:09.329 [2024-12-05 13:03:08.970484] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:09.329 [2024-12-05 13:03:08.970492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.329 [2024-12-05 13:03:08.970500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:09.329 [2024-12-05 13:03:08.970507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:09.329 [2024-12-05 13:03:08.970514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:09.329 [2024-12-05 13:03:08.970521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:09.329 [2024-12-05 13:03:08.970531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.330 [2024-12-05 13:03:08.970538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:09.330 [2024-12-05 13:03:08.970545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:32:09.330 [2024-12-05 13:03:08.970555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.330 [2024-12-05 13:03:08.981998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.330 [2024-12-05 13:03:08.982037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:09.330 [2024-12-05 13:03:08.982048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.399 ms 00:32:09.330 [2024-12-05 13:03:08.982058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.330 [2024-12-05 13:03:08.982142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.330 [2024-12-05 13:03:08.982151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:09.330 [2024-12-05 13:03:08.982164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:09.330 [2024-12-05 13:03:08.982171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.330 [2024-12-05 13:03:09.005424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.330 [2024-12-05 13:03:09.005497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:09.330 [2024-12-05 13:03:09.005522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.202 ms 00:32:09.330 [2024-12-05 13:03:09.005539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.330 [2024-12-05 13:03:09.005613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.330 [2024-12-05 13:03:09.005633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:09.330 [2024-12-05 13:03:09.005650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:09.330 [2024-12-05 13:03:09.005666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.330 [2024-12-05 13:03:09.006308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.330 [2024-12-05 13:03:09.006359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:09.330 [2024-12-05 13:03:09.006380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:32:09.330 [2024-12-05 13:03:09.006398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.330 [2024-12-05 13:03:09.006652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.330 [2024-12-05 13:03:09.006671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:09.330 [2024-12-05 13:03:09.006695] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms
00:32:09.330 [2024-12-05 13:03:09.006710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.014085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.014115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:32:09.330 [2024-12-05 13:03:09.014125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.341 ms
00:32:09.330 [2024-12-05 13:03:09.014133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.017153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:32:09.330 [2024-12-05 13:03:09.017187] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:32:09.330 [2024-12-05 13:03:09.017199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.017208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:32:09.330 [2024-12-05 13:03:09.017216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.978 ms
00:32:09.330 [2024-12-05 13:03:09.017224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.031683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.031738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:32:09.330 [2024-12-05 13:03:09.031749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.420 ms
00:32:09.330 [2024-12-05 13:03:09.031757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.033861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.033888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:32:09.330 [2024-12-05 13:03:09.033898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms
00:32:09.330 [2024-12-05 13:03:09.033905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.035617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.035734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:32:09.330 [2024-12-05 13:03:09.035750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms
00:32:09.330 [2024-12-05 13:03:09.035758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.036115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.036129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:32:09.330 [2024-12-05 13:03:09.036139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms
00:32:09.330 [2024-12-05 13:03:09.036147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.054746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.054987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:32:09.330 [2024-12-05 13:03:09.055016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.566 ms
00:32:09.330 [2024-12-05 13:03:09.055025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.062594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:32:09.330 [2024-12-05 13:03:09.065357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.065394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:32:09.330 [2024-12-05 13:03:09.065407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.293 ms
00:32:09.330 [2024-12-05 13:03:09.065416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.065502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.065513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:32:09.330 [2024-12-05 13:03:09.065522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:32:09.330 [2024-12-05 13:03:09.065531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.065597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.065607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:32:09.330 [2024-12-05 13:03:09.065619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:32:09.330 [2024-12-05 13:03:09.065627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.065647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.065655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:32:09.330 [2024-12-05 13:03:09.065664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:32:09.330 [2024-12-05 13:03:09.065672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.065708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:32:09.330 [2024-12-05 13:03:09.065721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.065729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:32:09.330 [2024-12-05 13:03:09.065737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:32:09.330 [2024-12-05 13:03:09.065747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.070031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.070138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:32:09.330 [2024-12-05 13:03:09.070190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms
00:32:09.330 [2024-12-05 13:03:09.070215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.070344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:09.330 [2024-12-05 13:03:09.070429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:32:09.330 [2024-12-05 13:03:09.070455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:32:09.330 [2024-12-05 13:03:09.070479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:09.330 [2024-12-05 13:03:09.071516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.446 ms, result 0
00:32:10.271 [2024-12-05T13:03:11.509Z] Copying: 16/1024 [MB] (16 MBps) [... 36 intermediate progress updates trimmed ...] [2024-12-05T13:03:46.938Z] Copying: 1024/1024 [MB] (average 27 MBps)
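Each management step above appears in the console as a trace_step quadruplet (Action/Rollback, name:, duration:, status:), and finish_msg reports the end-to-end figure (115.446 ms for this startup). A per-step timing table can be pulled back out of the captured text with a short post-processing script. The sketch below is hypothetical and not part of the SPDK tree or this test harness; it assumes only the [FTL][ftl0] notice format visible above and takes the captured log as a file argument.

#!/usr/bin/env python3
"""Summarize FTL trace_step timings from a captured console log (sketch)."""
import re
import sys

text = open(sys.argv[1], encoding="utf-8", errors="replace").read()

# 428:trace_step prints "name: <step>"; 430:trace_step prints "duration: <n> ms".
# The notices are emitted in matched order, so pairing them positionally works.
names = re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?)(?= \d{2}:\d{2}:\d{2}\.\d{3} \[|\n|$)",
    text,
)
durations = re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms", text
)

# Print the slowest steps first, then the sum, which should land near the
# finish_msg total (e.g. 115.446 ms) minus the time spent between steps.
for name, ms in sorted(zip(names, map(float, durations)), key=lambda p: -p[1]):
    print(f"{ms:9.3f} ms  {name}")
print(f"{sum(map(float, durations)):9.3f} ms  total")

Pairing names and durations positionally assumes the capture starts and ends on whole quadruplets; a capture cut mid-step, as at the edges of this excerpt, can shift the pairing by one.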
[2024-12-05 13:03:46.629382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:47.086 [2024-12-05 13:03:46.629433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:47.086 [2024-12-05 13:03:46.629447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:32:47.086 [2024-12-05 13:03:46.629464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:47.086 [2024-12-05 13:03:46.629492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:47.086 [2024-12-05 13:03:46.630079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:47.086 [2024-12-05 13:03:46.630099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:32:47.086 [2024-12-05 13:03:46.630108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms
00:32:47.086 [2024-12-05 13:03:46.630116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:47.086 [2024-12-05 13:03:46.631558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:47.086 [2024-12-05 13:03:46.631592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:32:47.086 [2024-12-05 13:03:46.631602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms
00:32:47.086 [2024-12-05 13:03:46.631609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:47.086 [2024-12-05 13:03:46.631638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:47.086 [2024-12-05 13:03:46.631647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:32:47.086 [2024-12-05 13:03:46.631656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:32:47.086 [2024-12-05 13:03:46.631663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:47.086 [2024-12-05 13:03:46.631711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:47.086 [2024-12-05 13:03:46.631720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:32:47.086 [2024-12-05 13:03:46.631733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:32:47.086 [2024-12-05 13:03:46.631740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:47.086 [2024-12-05 13:03:46.631753] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:47.086 [2024-12-05 13:03:46.631768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Band 2 through Band 100 trimmed: every band reports 0 / 261120 wr_cnt: 0 state: free ...]
00:32:47.088 [2024-12-05 13:03:46.632542] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:47.088 [2024-12-05 13:03:46.632550] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d3bffc2-01da-4dfa-8846-a146b1722d32
00:32:47.088 [2024-12-05 13:03:46.632561] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:32:47.088 [2024-12-05 13:03:46.632568] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:32:47.088 [2024-12-05 13:03:46.632578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:32:47.088 [2024-12-05 13:03:46.632585] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:32:47.088 [2024-12-05 13:03:46.632592] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:47.088 [2024-12-05 13:03:46.632599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:32:47.088 [2024-12-05 13:03:46.632606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:32:47.088 [2024-12-05 13:03:46.632613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:32:47.088 [2024-12-05 13:03:46.632619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:32:47.088 [2024-12-05 13:03:46.632626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:47.088 [2024-12-05 13:03:46.632632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:32:47.088 [2024-12-05 13:03:46.632643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms
00:32:47.088 [2024-12-05 13:03:46.632650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:47.088 [2024-12-05 13:03:46.634671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:47.088 [2024-12-05 13:03:46.634773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:32:47.088 [2024-12-05 13:03:46.634841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 2.007 ms 00:32:47.088 [2024-12-05 13:03:46.634909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.635015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.088 [2024-12-05 13:03:46.635059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:47.088 [2024-12-05 13:03:46.635123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:32:47.088 [2024-12-05 13:03:46.635145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.641148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.641257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:47.088 [2024-12-05 13:03:46.641322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.641370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.641449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.641509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:47.088 [2024-12-05 13:03:46.641530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.641549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.641609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.641733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:47.088 [2024-12-05 13:03:46.641757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.641776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.641849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.641874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:47.088 [2024-12-05 13:03:46.641908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.641964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.653491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.653630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:47.088 [2024-12-05 13:03:46.653680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.653703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.662549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.662692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:47.088 [2024-12-05 13:03:46.662707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.662722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.662795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.662821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:47.088 
[2024-12-05 13:03:46.662830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.662838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.662869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.662877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:47.088 [2024-12-05 13:03:46.662886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.088 [2024-12-05 13:03:46.662896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.088 [2024-12-05 13:03:46.662947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.088 [2024-12-05 13:03:46.662956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:47.089 [2024-12-05 13:03:46.662968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.089 [2024-12-05 13:03:46.662975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.089 [2024-12-05 13:03:46.663001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.089 [2024-12-05 13:03:46.663010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:47.089 [2024-12-05 13:03:46.663019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.089 [2024-12-05 13:03:46.663026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.089 [2024-12-05 13:03:46.663065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.089 [2024-12-05 13:03:46.663074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:47.089 [2024-12-05 13:03:46.663089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.089 [2024-12-05 13:03:46.663097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.089 [2024-12-05 13:03:46.663142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:47.089 [2024-12-05 13:03:46.663151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:47.089 [2024-12-05 13:03:46.663160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:47.089 [2024-12-05 13:03:46.663170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.089 [2024-12-05 13:03:46.663293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 33.879 ms, result 0 00:32:48.460 00:32:48.460 00:32:48.460 13:03:48 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:48.460 [2024-12-05 13:03:48.127385] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
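The restore pass above drives spdk_dd with --count=262144 FTL blocks, which is exactly the 1024 MB that both Copying progress runs count up to if one FTL block is 4 KiB. That 4 KiB block size is an assumption, not stated in the log, but every size printed here is consistent with it: the 261120-block bands in the validity dump, the l2p region in the layout dump further down, and the L2P entry math. A hypothetical cross-check, not part of the test:

# Hypothetical size cross-check for the figures in this log (not part of the
# test). Assumes a 4 KiB FTL logical block, which every number below matches.
BLOCK = 4096            # bytes per FTL block (assumption)
MiB = 1024 * 1024

# spdk_dd --count=262144 blocks -> the 1024 MB total in the Copying lines.
assert 262144 * BLOCK // MiB == 1024

# One band holds 261120 blocks -> 1020 MiB; 100 bands cover most of the
# 103424.00 MiB base device (the remainder is metadata such as sb/vmap).
assert 261120 * BLOCK // MiB == 1020

# Region l2p in the layout dump: blk_sz 0x5000 blocks -> "blocks: 80.00 MiB".
assert 0x5000 * BLOCK // MiB == 80

# The same 80 MiB follows from "L2P entries: 20971520" x "L2P address size: 4".
assert 20971520 * 4 // MiB == 80

# "WAF: inf" in the stats dump reads as 32 total (internal) writes over
# 0 user writes: an infinite write amplification factor on an idle device.
print("all size cross-checks pass")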
00:32:48.460 [2024-12-05 13:03:48.127656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92899 ] 00:32:48.460 [2024-12-05 13:03:48.288127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.718 [2024-12-05 13:03:48.312570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.718 [2024-12-05 13:03:48.415686] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:48.718 [2024-12-05 13:03:48.415768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:48.718 [2024-12-05 13:03:48.569497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.569687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:48.978 [2024-12-05 13:03:48.569708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:48.978 [2024-12-05 13:03:48.569717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.569775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.569789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:48.978 [2024-12-05 13:03:48.569797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:48.978 [2024-12-05 13:03:48.569805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.569846] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:48.978 [2024-12-05 13:03:48.570088] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:48.978 [2024-12-05 13:03:48.570103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.570113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:48.978 [2024-12-05 13:03:48.570124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:32:48.978 [2024-12-05 13:03:48.570132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.570389] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:48.978 [2024-12-05 13:03:48.570411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.570421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:48.978 [2024-12-05 13:03:48.570431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:48.978 [2024-12-05 13:03:48.570442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.570498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.570507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:48.978 [2024-12-05 13:03:48.570516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:48.978 [2024-12-05 13:03:48.570525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.570753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:48.978 [2024-12-05 13:03:48.570764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:48.978 [2024-12-05 13:03:48.570773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:32:48.978 [2024-12-05 13:03:48.570785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.570878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.570891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:48.978 [2024-12-05 13:03:48.570899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:32:48.978 [2024-12-05 13:03:48.570906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.570927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.570936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:48.978 [2024-12-05 13:03:48.570944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:48.978 [2024-12-05 13:03:48.570951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.570977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:48.978 [2024-12-05 13:03:48.572676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.572696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:48.978 [2024-12-05 13:03:48.572705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:32:48.978 [2024-12-05 13:03:48.572717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.572748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.572757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:48.978 [2024-12-05 13:03:48.572766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:48.978 [2024-12-05 13:03:48.572773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.572793] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:48.978 [2024-12-05 13:03:48.572963] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:48.978 [2024-12-05 13:03:48.573033] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:48.978 [2024-12-05 13:03:48.573075] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:48.978 [2024-12-05 13:03:48.573200] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:48.978 [2024-12-05 13:03:48.573370] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:48.978 [2024-12-05 13:03:48.573407] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:48.978 [2024-12-05 13:03:48.573438] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:48.978 [2024-12-05 13:03:48.573474] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:48.978 [2024-12-05 13:03:48.573505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:48.978 [2024-12-05 13:03:48.573524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:48.978 [2024-12-05 13:03:48.573588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:48.978 [2024-12-05 13:03:48.573609] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:48.978 [2024-12-05 13:03:48.573628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.573647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:48.978 [2024-12-05 13:03:48.573666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:32:48.978 [2024-12-05 13:03:48.573684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.573794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.978 [2024-12-05 13:03:48.573836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:48.978 [2024-12-05 13:03:48.573856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:48.978 [2024-12-05 13:03:48.573874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.978 [2024-12-05 13:03:48.574008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:48.978 [2024-12-05 13:03:48.574036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:48.978 [2024-12-05 13:03:48.574056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:48.978 [2024-12-05 13:03:48.574079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.978 [2024-12-05 13:03:48.574134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:48.978 [2024-12-05 13:03:48.574164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:48.979 [2024-12-05 13:03:48.574202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:48.979 [2024-12-05 13:03:48.574253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:48.979 [2024-12-05 13:03:48.574292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:48.979 [2024-12-05 13:03:48.574310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:48.979 [2024-12-05 13:03:48.574349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:48.979 [2024-12-05 13:03:48.574369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:48.979 [2024-12-05 13:03:48.574460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:48.979 [2024-12-05 13:03:48.574480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:48.979 [2024-12-05 13:03:48.574516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:48.979 [2024-12-05 13:03:48.574533] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:48.979 [2024-12-05 13:03:48.574573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.979 [2024-12-05 13:03:48.574637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:48.979 [2024-12-05 13:03:48.574658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.979 [2024-12-05 13:03:48.574694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:48.979 [2024-12-05 13:03:48.574711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.979 [2024-12-05 13:03:48.574746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:48.979 [2024-12-05 13:03:48.574764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:48.979 [2024-12-05 13:03:48.574844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:48.979 [2024-12-05 13:03:48.574862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:48.979 [2024-12-05 13:03:48.574880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:48.979 [2024-12-05 13:03:48.574899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:48.979 [2024-12-05 13:03:48.574922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:48.979 [2024-12-05 13:03:48.574941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:48.979 [2024-12-05 13:03:48.574990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:48.979 [2024-12-05 13:03:48.575012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:48.979 [2024-12-05 13:03:48.575029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.979 [2024-12-05 13:03:48.575048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:48.979 [2024-12-05 13:03:48.575065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:48.979 [2024-12-05 13:03:48.575083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.979 [2024-12-05 13:03:48.575101] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:48.979 [2024-12-05 13:03:48.575184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:48.979 [2024-12-05 13:03:48.575203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:48.979 [2024-12-05 13:03:48.575224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:48.979 [2024-12-05 13:03:48.575243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:48.979 [2024-12-05 13:03:48.575260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:48.979 [2024-12-05 13:03:48.575278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:48.979 
[2024-12-05 13:03:48.575320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:48.979 [2024-12-05 13:03:48.575345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:48.979 [2024-12-05 13:03:48.575364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:48.979 [2024-12-05 13:03:48.575455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:48.979 [2024-12-05 13:03:48.575489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:48.979 [2024-12-05 13:03:48.575519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:48.979 [2024-12-05 13:03:48.575548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:48.979 [2024-12-05 13:03:48.575576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:48.979 [2024-12-05 13:03:48.575632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:48.979 [2024-12-05 13:03:48.575663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:48.979 [2024-12-05 13:03:48.575690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:48.979 [2024-12-05 13:03:48.575718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:48.979 [2024-12-05 13:03:48.575746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:48.979 [2024-12-05 13:03:48.575826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:48.979 [2024-12-05 13:03:48.575858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:48.979 [2024-12-05 13:03:48.575887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:48.979 [2024-12-05 13:03:48.575916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:48.979 [2024-12-05 13:03:48.575948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:48.979 [2024-12-05 13:03:48.576024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:48.979 [2024-12-05 13:03:48.576052] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:48.979 [2024-12-05 13:03:48.576081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:48.979 [2024-12-05 13:03:48.576114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:48.979 [2024-12-05 13:03:48.576171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:48.979 [2024-12-05 13:03:48.576224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:48.979 [2024-12-05 13:03:48.576255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:48.979 [2024-12-05 13:03:48.576307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.979 [2024-12-05 13:03:48.576328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:48.979 [2024-12-05 13:03:48.576348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:32:48.979 [2024-12-05 13:03:48.576366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.979 [2024-12-05 13:03:48.584266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.979 [2024-12-05 13:03:48.584374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:48.979 [2024-12-05 13:03:48.584425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.770 ms 00:32:48.979 [2024-12-05 13:03:48.584447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.979 [2024-12-05 13:03:48.584540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.979 [2024-12-05 13:03:48.584561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:48.979 [2024-12-05 13:03:48.584606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:32:48.979 [2024-12-05 13:03:48.584634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.979 [2024-12-05 13:03:48.607260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.979 [2024-12-05 13:03:48.607450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:48.979 [2024-12-05 13:03:48.607534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.560 ms 00:32:48.979 [2024-12-05 13:03:48.607600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.979 [2024-12-05 13:03:48.607690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.979 [2024-12-05 13:03:48.607795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:48.979 [2024-12-05 13:03:48.607861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:48.979 [2024-12-05 13:03:48.607970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.979 [2024-12-05 13:03:48.608129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.979 [2024-12-05 13:03:48.608194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:48.979 [2024-12-05 13:03:48.608286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:48.979 [2024-12-05 13:03:48.608320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.979 [2024-12-05 13:03:48.608518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.979 [2024-12-05 13:03:48.608602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:48.980 [2024-12-05 13:03:48.608672] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:32:48.980 [2024-12-05 13:03:48.608687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.616126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.616263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:48.980 [2024-12-05 13:03:48.616348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.410 ms 00:32:48.980 [2024-12-05 13:03:48.616422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.616600] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:48.980 [2024-12-05 13:03:48.616735] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:48.980 [2024-12-05 13:03:48.616837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.616976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:48.980 [2024-12-05 13:03:48.617060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:32:48.980 [2024-12-05 13:03:48.617098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.629401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.629502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:48.980 [2024-12-05 13:03:48.629550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.231 ms 00:32:48.980 [2024-12-05 13:03:48.629576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.629701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.629751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:48.980 [2024-12-05 13:03:48.629851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:32:48.980 [2024-12-05 13:03:48.629880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.629948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.629978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:48.980 [2024-12-05 13:03:48.630037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:48.980 [2024-12-05 13:03:48.630066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.630382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.630456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:48.980 [2024-12-05 13:03:48.630502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:32:48.980 [2024-12-05 13:03:48.630528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.630558] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:48.980 [2024-12-05 13:03:48.630574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.630585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:48.980 [2024-12-05 13:03:48.630593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:48.980 [2024-12-05 13:03:48.630599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.639130] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:48.980 [2024-12-05 13:03:48.639259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.639269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:48.980 [2024-12-05 13:03:48.639278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.641 ms 00:32:48.980 [2024-12-05 13:03:48.639287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.641621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.641717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:48.980 [2024-12-05 13:03:48.641730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.317 ms 00:32:48.980 [2024-12-05 13:03:48.641738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.641825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.641841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:48.980 [2024-12-05 13:03:48.641853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:48.980 [2024-12-05 13:03:48.641863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.641917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.641927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:48.980 [2024-12-05 13:03:48.641935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:48.980 [2024-12-05 13:03:48.641942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.641980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:48.980 [2024-12-05 13:03:48.641993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.642000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:48.980 [2024-12-05 13:03:48.642008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:48.980 [2024-12-05 13:03:48.642015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.645980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.646015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:48.980 [2024-12-05 13:03:48.646026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.946 ms 00:32:48.980 [2024-12-05 13:03:48.646033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.646171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.980 [2024-12-05 13:03:48.646181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:48.980 [2024-12-05 13:03:48.646190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:32:48.980 [2024-12-05 13:03:48.646197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.980 [2024-12-05 13:03:48.647165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.239 ms, result 0 00:32:50.353  [2024-12-05T13:03:51.138Z] Copying: 49/1024 [MB] (49 MBps) [2024-12-05T13:03:52.071Z] Copying: 97/1024 [MB] (47 MBps) [2024-12-05T13:03:53.004Z] Copying: 145/1024 [MB] (48 MBps) [2024-12-05T13:03:53.936Z] Copying: 194/1024 [MB] (49 MBps) [2024-12-05T13:03:54.869Z] Copying: 244/1024 [MB] (49 MBps) [2024-12-05T13:03:56.241Z] Copying: 292/1024 [MB] (47 MBps) [2024-12-05T13:03:57.177Z] Copying: 342/1024 [MB] (50 MBps) [2024-12-05T13:03:58.127Z] Copying: 392/1024 [MB] (50 MBps) [2024-12-05T13:03:59.062Z] Copying: 438/1024 [MB] (45 MBps) [2024-12-05T13:03:59.995Z] Copying: 485/1024 [MB] (46 MBps) [2024-12-05T13:04:00.930Z] Copying: 531/1024 [MB] (46 MBps) [2024-12-05T13:04:01.862Z] Copying: 579/1024 [MB] (47 MBps) [2024-12-05T13:04:03.264Z] Copying: 627/1024 [MB] (48 MBps) [2024-12-05T13:04:03.830Z] Copying: 675/1024 [MB] (47 MBps) [2024-12-05T13:04:05.202Z] Copying: 722/1024 [MB] (47 MBps) [2024-12-05T13:04:06.134Z] Copying: 769/1024 [MB] (46 MBps) [2024-12-05T13:04:07.095Z] Copying: 816/1024 [MB] (47 MBps) [2024-12-05T13:04:08.050Z] Copying: 866/1024 [MB] (49 MBps) [2024-12-05T13:04:08.992Z] Copying: 915/1024 [MB] (49 MBps) [2024-12-05T13:04:09.921Z] Copying: 964/1024 [MB] (49 MBps) [2024-12-05T13:04:10.178Z] Copying: 1013/1024 [MB] (49 MBps) [2024-12-05T13:04:11.112Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-12-05 13:04:10.850244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.260 [2024-12-05 13:04:10.850320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:11.260 [2024-12-05 13:04:10.850336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:11.260 [2024-12-05 13:04:10.850346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.260 [2024-12-05 13:04:10.850374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:11.260 [2024-12-05 13:04:10.850973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.260 [2024-12-05 13:04:10.850993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:11.260 [2024-12-05 13:04:10.851004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:33:11.260 [2024-12-05 13:04:10.851013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.260 [2024-12-05 13:04:10.851243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.260 [2024-12-05 13:04:10.851254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:11.260 [2024-12-05 13:04:10.851263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:33:11.260 [2024-12-05 13:04:10.851271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.260 [2024-12-05 13:04:10.851306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.260 [2024-12-05 13:04:10.851316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:11.260 [2024-12-05 13:04:10.851324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:11.260 [2024-12-05 13:04:10.851339] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.260 [2024-12-05 13:04:10.851395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.260 [2024-12-05 13:04:10.851414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:11.260 [2024-12-05 13:04:10.851423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:11.260 [2024-12-05 13:04:10.851432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.260 [2024-12-05 13:04:10.851447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:11.260 [2024-12-05 13:04:10.851464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851876] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.851997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 
13:04:10.852077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:11.260 [2024-12-05 13:04:10.852226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:33:11.261 [2024-12-05 13:04:10.852264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:11.261 [2024-12-05 13:04:10.852493] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:11.261 [2024-12-05 13:04:10.852501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d3bffc2-01da-4dfa-8846-a146b1722d32 00:33:11.261 [2024-12-05 13:04:10.852510] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:11.261 [2024-12-05 13:04:10.852517] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:11.261 [2024-12-05 13:04:10.852524] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:11.261 [2024-12-05 13:04:10.852536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:11.261 [2024-12-05 13:04:10.852543] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:11.261 [2024-12-05 13:04:10.852550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:11.261 [2024-12-05 13:04:10.852557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:11.261 [2024-12-05 13:04:10.852567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:11.261 [2024-12-05 13:04:10.852574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:11.261 [2024-12-05 13:04:10.852581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.261 [2024-12-05 13:04:10.852588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:11.261 [2024-12-05 13:04:10.852596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:33:11.261 [2024-12-05 13:04:10.852606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.854467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.261 [2024-12-05 13:04:10.854497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:11.261 [2024-12-05 13:04:10.854509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:33:11.261 [2024-12-05 13:04:10.854517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.854618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.261 [2024-12-05 13:04:10.854627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:11.261 [2024-12-05 13:04:10.854639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:11.261 [2024-12-05 13:04:10.854647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.860739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.860774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:33:11.261 [2024-12-05 13:04:10.860789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.860802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.860905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.860920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.261 [2024-12-05 13:04:10.860932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.860943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.861030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.861044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.261 [2024-12-05 13:04:10.861052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.861063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.861084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.861092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.261 [2024-12-05 13:04:10.861101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.861111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.874089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.874128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.261 [2024-12-05 13:04:10.874143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.874152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.883855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.883895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.261 [2024-12-05 13:04:10.883915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.883923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.883980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.883989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.261 [2024-12-05 13:04:10.883998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.884007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.884037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.884046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.261 [2024-12-05 13:04:10.884054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.884069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.884128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 
[2024-12-05 13:04:10.884137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.261 [2024-12-05 13:04:10.884145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.884152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.884176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.884184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:11.261 [2024-12-05 13:04:10.884192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.884199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.884238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.884247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:11.261 [2024-12-05 13:04:10.884254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.261 [2024-12-05 13:04:10.884265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.261 [2024-12-05 13:04:10.884317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.261 [2024-12-05 13:04:10.884327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.262 [2024-12-05 13:04:10.884335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.262 [2024-12-05 13:04:10.884342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.262 [2024-12-05 13:04:10.884466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 34.194 ms, result 0 00:33:11.519 00:33:11.519 00:33:11.519 13:04:11 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:14.050 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:14.050 13:04:13 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:14.050 [2024-12-05 13:04:13.378875] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:33:14.050 [2024-12-05 13:04:13.379126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93159 ] 00:33:14.050 [2024-12-05 13:04:13.536662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.050 [2024-12-05 13:04:13.561643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.050 [2024-12-05 13:04:13.666415] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:14.050 [2024-12-05 13:04:13.666497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:14.050 [2024-12-05 13:04:13.821623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.821691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:14.051 [2024-12-05 13:04:13.821705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:14.051 [2024-12-05 13:04:13.821714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.821772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.821783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:14.051 [2024-12-05 13:04:13.821791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:14.051 [2024-12-05 13:04:13.821799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.821844] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:14.051 [2024-12-05 13:04:13.822106] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:14.051 [2024-12-05 13:04:13.822122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.822132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:14.051 [2024-12-05 13:04:13.822143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:33:14.051 [2024-12-05 13:04:13.822150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.822443] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:14.051 [2024-12-05 13:04:13.822472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.822487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:14.051 [2024-12-05 13:04:13.822496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:14.051 [2024-12-05 13:04:13.822507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.822587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.822598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:14.051 [2024-12-05 13:04:13.822606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:14.051 [2024-12-05 13:04:13.822614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.822870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:14.051 [2024-12-05 13:04:13.822881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:14.051 [2024-12-05 13:04:13.822890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:33:14.051 [2024-12-05 13:04:13.822900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.822978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.822991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:14.051 [2024-12-05 13:04:13.822998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:14.051 [2024-12-05 13:04:13.823005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.823032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.823040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:14.051 [2024-12-05 13:04:13.823054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:14.051 [2024-12-05 13:04:13.823061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.823083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:14.051 [2024-12-05 13:04:13.824860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.824898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:14.051 [2024-12-05 13:04:13.824908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.783 ms 00:33:14.051 [2024-12-05 13:04:13.824915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.824946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.824955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:14.051 [2024-12-05 13:04:13.824967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:14.051 [2024-12-05 13:04:13.824974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.825000] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:14.051 [2024-12-05 13:04:13.825025] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:14.051 [2024-12-05 13:04:13.825059] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:14.051 [2024-12-05 13:04:13.825074] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:14.051 [2024-12-05 13:04:13.825178] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:14.051 [2024-12-05 13:04:13.825199] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:14.051 [2024-12-05 13:04:13.825210] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:14.051 [2024-12-05 13:04:13.825221] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825236] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825244] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:14.051 [2024-12-05 13:04:13.825252] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:14.051 [2024-12-05 13:04:13.825259] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:14.051 [2024-12-05 13:04:13.825266] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:14.051 [2024-12-05 13:04:13.825274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.825280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:14.051 [2024-12-05 13:04:13.825288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:33:14.051 [2024-12-05 13:04:13.825295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.825377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.051 [2024-12-05 13:04:13.825387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:14.051 [2024-12-05 13:04:13.825395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:14.051 [2024-12-05 13:04:13.825401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.051 [2024-12-05 13:04:13.825516] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:14.051 [2024-12-05 13:04:13.825533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:14.051 [2024-12-05 13:04:13.825543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:14.051 [2024-12-05 13:04:13.825578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:14.051 [2024-12-05 13:04:13.825603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:14.051 [2024-12-05 13:04:13.825618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:14.051 [2024-12-05 13:04:13.825626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:14.051 [2024-12-05 13:04:13.825633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:14.051 [2024-12-05 13:04:13.825641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:14.051 [2024-12-05 13:04:13.825649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:14.051 [2024-12-05 13:04:13.825656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:14.051 [2024-12-05 13:04:13.825672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825679] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:14.051 [2024-12-05 13:04:13.825699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:14.051 [2024-12-05 13:04:13.825721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:14.051 [2024-12-05 13:04:13.825744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:14.051 [2024-12-05 13:04:13.825764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:14.051 [2024-12-05 13:04:13.825778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:14.051 [2024-12-05 13:04:13.825785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:14.051 [2024-12-05 13:04:13.825798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:14.051 [2024-12-05 13:04:13.825826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:14.051 [2024-12-05 13:04:13.825833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:14.051 [2024-12-05 13:04:13.825842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:14.051 [2024-12-05 13:04:13.825849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:14.051 [2024-12-05 13:04:13.825856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:14.051 [2024-12-05 13:04:13.825863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:14.052 [2024-12-05 13:04:13.825869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:14.052 [2024-12-05 13:04:13.825876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:14.052 [2024-12-05 13:04:13.825883] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:14.052 [2024-12-05 13:04:13.825890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:14.052 [2024-12-05 13:04:13.825898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:14.052 [2024-12-05 13:04:13.825921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:14.052 [2024-12-05 13:04:13.825929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:14.052 [2024-12-05 13:04:13.825936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:14.052 [2024-12-05 13:04:13.825943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:14.052 
[2024-12-05 13:04:13.825950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:14.052 [2024-12-05 13:04:13.825959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:14.052 [2024-12-05 13:04:13.825966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:14.052 [2024-12-05 13:04:13.825974] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:14.052 [2024-12-05 13:04:13.825987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:14.052 [2024-12-05 13:04:13.825999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:14.052 [2024-12-05 13:04:13.826007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:14.052 [2024-12-05 13:04:13.826015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:14.052 [2024-12-05 13:04:13.826022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:14.052 [2024-12-05 13:04:13.826029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:14.052 [2024-12-05 13:04:13.826036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:14.052 [2024-12-05 13:04:13.826043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:14.052 [2024-12-05 13:04:13.826051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:14.052 [2024-12-05 13:04:13.826058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:14.052 [2024-12-05 13:04:13.826066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:14.052 [2024-12-05 13:04:13.826072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:14.052 [2024-12-05 13:04:13.826080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:14.052 [2024-12-05 13:04:13.826090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:14.052 [2024-12-05 13:04:13.826098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:14.052 [2024-12-05 13:04:13.826106] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:14.052 [2024-12-05 13:04:13.826117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:14.052 [2024-12-05 13:04:13.826125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:14.052 [2024-12-05 13:04:13.826133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:14.052 [2024-12-05 13:04:13.826139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:14.052 [2024-12-05 13:04:13.826147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:14.052 [2024-12-05 13:04:13.826155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.826162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:14.052 [2024-12-05 13:04:13.826170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:33:14.052 [2024-12-05 13:04:13.826177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.833958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.833982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:14.052 [2024-12-05 13:04:13.833991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.741 ms 00:33:14.052 [2024-12-05 13:04:13.833999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.834075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.834084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:14.052 [2024-12-05 13:04:13.834098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:14.052 [2024-12-05 13:04:13.834105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.852198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.852245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:14.052 [2024-12-05 13:04:13.852260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.048 ms 00:33:14.052 [2024-12-05 13:04:13.852270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.852323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.852335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:14.052 [2024-12-05 13:04:13.852347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:14.052 [2024-12-05 13:04:13.852363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.852472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.852490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:14.052 [2024-12-05 13:04:13.852501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:14.052 [2024-12-05 13:04:13.852510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.852657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.852668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:14.052 [2024-12-05 13:04:13.852678] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:33:14.052 [2024-12-05 13:04:13.852690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.859596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.859625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:14.052 [2024-12-05 13:04:13.859644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:33:14.052 [2024-12-05 13:04:13.859652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.859752] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:14.052 [2024-12-05 13:04:13.859764] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:14.052 [2024-12-05 13:04:13.859777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.859785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:14.052 [2024-12-05 13:04:13.859794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:14.052 [2024-12-05 13:04:13.859829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.872091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.872116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:14.052 [2024-12-05 13:04:13.872126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.245 ms 00:33:14.052 [2024-12-05 13:04:13.872134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.872258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.872268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:14.052 [2024-12-05 13:04:13.872276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:14.052 [2024-12-05 13:04:13.872287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.872327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.872339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:14.052 [2024-12-05 13:04:13.872347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:14.052 [2024-12-05 13:04:13.872362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.872672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.872682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:14.052 [2024-12-05 13:04:13.872694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:33:14.052 [2024-12-05 13:04:13.872701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.872718] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:14.052 [2024-12-05 13:04:13.872731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.872741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:14.052 [2024-12-05 13:04:13.872752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:14.052 [2024-12-05 13:04:13.872759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.881322] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:14.052 [2024-12-05 13:04:13.881445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.881455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:14.052 [2024-12-05 13:04:13.881469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.670 ms 00:33:14.052 [2024-12-05 13:04:13.881480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.052 [2024-12-05 13:04:13.883937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.052 [2024-12-05 13:04:13.883962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:14.052 [2024-12-05 13:04:13.883972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:33:14.052 [2024-12-05 13:04:13.883981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.053 [2024-12-05 13:04:13.884057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.053 [2024-12-05 13:04:13.884067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:14.053 [2024-12-05 13:04:13.884076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:14.053 [2024-12-05 13:04:13.884086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.053 [2024-12-05 13:04:13.884121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.053 [2024-12-05 13:04:13.884130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:14.053 [2024-12-05 13:04:13.884138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:14.053 [2024-12-05 13:04:13.884146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.053 [2024-12-05 13:04:13.884179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:14.053 [2024-12-05 13:04:13.884189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.053 [2024-12-05 13:04:13.884198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:14.053 [2024-12-05 13:04:13.884206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:14.053 [2024-12-05 13:04:13.884213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.053 [2024-12-05 13:04:13.888236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.053 [2024-12-05 13:04:13.888266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:14.053 [2024-12-05 13:04:13.888277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.001 ms 00:33:14.053 [2024-12-05 13:04:13.888286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.053 [2024-12-05 13:04:13.888350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.053 [2024-12-05 13:04:13.888359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:14.053 [2024-12-05 13:04:13.888367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:33:14.053 [2024-12-05 13:04:13.888374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.053 [2024-12-05 13:04:13.889385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 67.339 ms, result 0 00:33:15.424  [2024-12-05T13:04:16.209Z] Copying: 43/1024 [MB] (43 MBps) [2024-12-05T13:04:17.142Z] Copying: 88/1024 [MB] (45 MBps) [2024-12-05T13:04:18.125Z] Copying: 133/1024 [MB] (45 MBps) [2024-12-05T13:04:19.062Z] Copying: 179/1024 [MB] (45 MBps) [2024-12-05T13:04:19.993Z] Copying: 224/1024 [MB] (44 MBps) [2024-12-05T13:04:20.953Z] Copying: 269/1024 [MB] (45 MBps) [2024-12-05T13:04:22.336Z] Copying: 313/1024 [MB] (44 MBps) [2024-12-05T13:04:22.901Z] Copying: 358/1024 [MB] (44 MBps) [2024-12-05T13:04:24.273Z] Copying: 404/1024 [MB] (45 MBps) [2024-12-05T13:04:25.208Z] Copying: 451/1024 [MB] (46 MBps) [2024-12-05T13:04:26.140Z] Copying: 496/1024 [MB] (45 MBps) [2024-12-05T13:04:27.074Z] Copying: 541/1024 [MB] (44 MBps) [2024-12-05T13:04:28.006Z] Copying: 586/1024 [MB] (45 MBps) [2024-12-05T13:04:28.969Z] Copying: 631/1024 [MB] (45 MBps) [2024-12-05T13:04:29.901Z] Copying: 685/1024 [MB] (53 MBps) [2024-12-05T13:04:31.273Z] Copying: 732/1024 [MB] (46 MBps) [2024-12-05T13:04:32.208Z] Copying: 776/1024 [MB] (43 MBps) [2024-12-05T13:04:33.140Z] Copying: 820/1024 [MB] (44 MBps) [2024-12-05T13:04:34.083Z] Copying: 861/1024 [MB] (40 MBps) [2024-12-05T13:04:35.014Z] Copying: 906/1024 [MB] (45 MBps) [2024-12-05T13:04:35.943Z] Copying: 950/1024 [MB] (43 MBps) [2024-12-05T13:04:37.313Z] Copying: 995/1024 [MB] (45 MBps) [2024-12-05T13:04:37.880Z] Copying: 1023/1024 [MB] (27 MBps) [2024-12-05T13:04:37.880Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-12-05 13:04:37.591889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.028 [2024-12-05 13:04:37.591952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:38.028 [2024-12-05 13:04:37.591970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:38.028 [2024-12-05 13:04:37.591978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.028 [2024-12-05 13:04:37.593313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:38.028 [2024-12-05 13:04:37.596016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.028 [2024-12-05 13:04:37.596052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:38.028 [2024-12-05 13:04:37.596062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:33:38.028 [2024-12-05 13:04:37.596070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.028 [2024-12-05 13:04:37.605084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.028 [2024-12-05 13:04:37.605120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:38.028 [2024-12-05 13:04:37.605131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.702 ms 00:33:38.028 [2024-12-05 13:04:37.605139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.028 [2024-12-05 13:04:37.605174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.028 [2024-12-05 13:04:37.605183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:38.028 [2024-12-05 13:04:37.605192] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:38.028 [2024-12-05 13:04:37.605199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.028 [2024-12-05 13:04:37.605247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.028 [2024-12-05 13:04:37.605261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:38.028 [2024-12-05 13:04:37.605269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:38.028 [2024-12-05 13:04:37.605276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.028 [2024-12-05 13:04:37.605289] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:38.028 [2024-12-05 13:04:37.605301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:33:38.028 [2024-12-05 13:04:37.605315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605646] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:38.028 [2024-12-05 13:04:37.605773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 
13:04:37.605845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.605995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:33:38.029 [2024-12-05 13:04:37.606040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:38.029 [2024-12-05 13:04:37.606093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:38.029 [2024-12-05 13:04:37.606101] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d3bffc2-01da-4dfa-8846-a146b1722d32 00:33:38.029 [2024-12-05 13:04:37.606109] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:33:38.029 [2024-12-05 13:04:37.606116] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:33:38.029 [2024-12-05 13:04:37.606123] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:33:38.029 [2024-12-05 13:04:37.606131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:38.029 [2024-12-05 13:04:37.606140] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:38.029 [2024-12-05 13:04:37.606147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:38.029 [2024-12-05 13:04:37.606155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:38.029 [2024-12-05 13:04:37.606161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:38.029 [2024-12-05 13:04:37.606167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:38.029 [2024-12-05 13:04:37.606174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.029 [2024-12-05 13:04:37.606181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:38.029 [2024-12-05 13:04:37.606192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:33:38.029 [2024-12-05 13:04:37.606200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.608027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.029 [2024-12-05 13:04:37.608054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:38.029 [2024-12-05 13:04:37.608069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:33:38.029 [2024-12-05 13:04:37.608077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.608169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.029 [2024-12-05 13:04:37.608179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:38.029 [2024-12-05 13:04:37.608187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:33:38.029 [2024-12-05 13:04:37.608194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.614254] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.614296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:38.029 [2024-12-05 13:04:37.614306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.614313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.614373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.614382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:38.029 [2024-12-05 13:04:37.614390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.614397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.614444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.614456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:38.029 [2024-12-05 13:04:37.614465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.614472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.614487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.614495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:38.029 [2024-12-05 13:04:37.614503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.614510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.626248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.626303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:38.029 [2024-12-05 13:04:37.626315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.626322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.636117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.636183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:38.029 [2024-12-05 13:04:37.636202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.636228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.636294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.636305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:38.029 [2024-12-05 13:04:37.636319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.636327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.636382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.636393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:38.029 [2024-12-05 13:04:37.636402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.636417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:33:38.029 [2024-12-05 13:04:37.636474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.636484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:38.029 [2024-12-05 13:04:37.636492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.029 [2024-12-05 13:04:37.636502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.029 [2024-12-05 13:04:37.636530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.029 [2024-12-05 13:04:37.636539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:38.029 [2024-12-05 13:04:37.636547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.030 [2024-12-05 13:04:37.636554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.030 [2024-12-05 13:04:37.636592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.030 [2024-12-05 13:04:37.636600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:38.030 [2024-12-05 13:04:37.636608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.030 [2024-12-05 13:04:37.636615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.030 [2024-12-05 13:04:37.636685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:38.030 [2024-12-05 13:04:37.636696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:38.030 [2024-12-05 13:04:37.636704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:38.030 [2024-12-05 13:04:37.636712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.030 [2024-12-05 13:04:37.636893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 47.731 ms, result 0 00:33:39.928 00:33:39.928 00:33:39.928 13:04:39 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:39.928 [2024-12-05 13:04:39.506067] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
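The figures in the shutdown dump and the spdk_dd invocation above can be cross-checked against each other. A minimal sanity-check sketch, assuming the 4 KiB FTL block size implied by the copy totals (the log reports block counts and megabytes but never states the block size directly):

```python
# Cross-check the numbers reported by ftl_dev_dump_stats and spdk_dd above.

# WAF from the write counters in the debug dump:
total_writes = 129568          # "total writes: 129568"
user_writes = 129536           # "user writes: 129536"
print(f"WAF = {total_writes / user_writes:.4f}")   # 1.0002, as logged

# spdk_dd geometry: --skip and --count are given in blocks. The copy loop
# reports 1024/1024 [MB], so with count=262144 the block size must be 4096 B.
count = 262144
block_size = 1024 * 1024**2 // count               # -> 4096
print(f"inferred block size: {block_size} B")
print(f"skip offset: {131072 * block_size // 1024**2} MiB")   # 512 MiB

# Throughput: at the reported 43 MBps average, 1024 MB takes ~24 s, which
# matches the wall clock between startup finishing (13:04:13.889) and the
# deinit sequence beginning (13:04:37.591).
print(f"expected copy time = {1024 / 43:.1f} s")
```

These checks are derived only from values visible in the log; they are not part of the test itself.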
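Every management step in this log is traced by mngt/ftl_mngt.c as an Action / name / duration / status quartet, which makes per-step timings easy to pull out of a captured console log. A small parsing sketch, assuming one record per line as in the raw timestamper output (the file name in the usage comment is a placeholder):

```python
import re

# trace_step emits four *NOTICE* records per management step:
#   [FTL][ftl0] Action
#   [FTL][ftl0] name: <step name>
#   [FTL][ftl0] duration: <ms> ms
#   [FTL][ftl0] status: <code>
NAME_RE = re.compile(r"\[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(lines):
    """Yield (step name, duration in ms) for each traced management step."""
    pending = None
    for line in lines:
        if (m := NAME_RE.search(line)):
            pending = m.group(1)
        elif (m := DUR_RE.search(line)) and pending is not None:
            yield pending, float(m.group(1))
            pending = None

# Example usage against a saved console log (path is hypothetical):
# with open("nvme-vg-autotest.log") as f:
#     for name, ms in sorted(step_durations(f), key=lambda x: -x[1]):
#         print(f"{ms:9.3f} ms  {name}")
```

Summing the per-step durations for one startup should land close to the logged 'FTL startup' total (e.g. 67.339 ms above), with the small remainder accounted for by untraced transitions between steps.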
00:33:39.928 [2024-12-05 13:04:39.506201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93423 ] 00:33:39.928 [2024-12-05 13:04:39.667959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.928 [2024-12-05 13:04:39.692895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.187 [2024-12-05 13:04:39.797408] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:40.187 [2024-12-05 13:04:39.797483] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:40.187 [2024-12-05 13:04:39.951440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.187 [2024-12-05 13:04:39.951501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:40.187 [2024-12-05 13:04:39.951519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:40.187 [2024-12-05 13:04:39.951528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.187 [2024-12-05 13:04:39.951580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.187 [2024-12-05 13:04:39.951590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:40.187 [2024-12-05 13:04:39.951602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:40.187 [2024-12-05 13:04:39.951614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.187 [2024-12-05 13:04:39.951640] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:40.187 [2024-12-05 13:04:39.951888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:40.187 [2024-12-05 13:04:39.951905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.187 [2024-12-05 13:04:39.951913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:40.187 [2024-12-05 13:04:39.951927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:33:40.187 [2024-12-05 13:04:39.951935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.187 [2024-12-05 13:04:39.952179] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:40.187 [2024-12-05 13:04:39.952200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.187 [2024-12-05 13:04:39.952209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:40.187 [2024-12-05 13:04:39.952217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:40.187 [2024-12-05 13:04:39.952228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.187 [2024-12-05 13:04:39.952305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.187 [2024-12-05 13:04:39.952316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:40.187 [2024-12-05 13:04:39.952324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:40.187 [2024-12-05 13:04:39.952332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.187 [2024-12-05 13:04:39.952565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:40.187 [2024-12-05 13:04:39.952576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:40.187 [2024-12-05 13:04:39.952590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:33:40.187 [2024-12-05 13:04:39.952600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.187 [2024-12-05 13:04:39.952672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.187 [2024-12-05 13:04:39.952682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:40.187 [2024-12-05 13:04:39.952689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:40.187 [2024-12-05 13:04:39.952696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.187 [2024-12-05 13:04:39.952716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.188 [2024-12-05 13:04:39.952724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:40.188 [2024-12-05 13:04:39.952736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:40.188 [2024-12-05 13:04:39.952743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.188 [2024-12-05 13:04:39.952762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:40.188 [2024-12-05 13:04:39.954528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.188 [2024-12-05 13:04:39.954552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:40.188 [2024-12-05 13:04:39.954562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.772 ms 00:33:40.188 [2024-12-05 13:04:39.954570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.188 [2024-12-05 13:04:39.954610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.188 [2024-12-05 13:04:39.954619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:40.188 [2024-12-05 13:04:39.954634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:40.188 [2024-12-05 13:04:39.954641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.188 [2024-12-05 13:04:39.954660] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:40.188 [2024-12-05 13:04:39.954682] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:40.188 [2024-12-05 13:04:39.954722] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:40.188 [2024-12-05 13:04:39.954737] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:40.188 [2024-12-05 13:04:39.954852] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:40.188 [2024-12-05 13:04:39.954863] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:40.188 [2024-12-05 13:04:39.954874] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:40.188 [2024-12-05 13:04:39.954883] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:40.188 [2024-12-05 13:04:39.954895] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:40.188 [2024-12-05 13:04:39.954903] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:40.188 [2024-12-05 13:04:39.954910] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:40.188 [2024-12-05 13:04:39.954917] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:40.188 [2024-12-05 13:04:39.954927] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:40.188 [2024-12-05 13:04:39.954934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.188 [2024-12-05 13:04:39.954941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:40.188 [2024-12-05 13:04:39.954948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:33:40.188 [2024-12-05 13:04:39.954955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.188 [2024-12-05 13:04:39.955045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.188 [2024-12-05 13:04:39.955056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:40.188 [2024-12-05 13:04:39.955064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:40.188 [2024-12-05 13:04:39.955071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.188 [2024-12-05 13:04:39.955180] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:40.188 [2024-12-05 13:04:39.955191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:40.188 [2024-12-05 13:04:39.955200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:40.188 [2024-12-05 13:04:39.955234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:40.188 [2024-12-05 13:04:39.955262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:40.188 [2024-12-05 13:04:39.955278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:40.188 [2024-12-05 13:04:39.955286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:40.188 [2024-12-05 13:04:39.955294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:40.188 [2024-12-05 13:04:39.955302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:40.188 [2024-12-05 13:04:39.955310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:40.188 [2024-12-05 13:04:39.955317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:40.188 [2024-12-05 13:04:39.955333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955341] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:40.188 [2024-12-05 13:04:39.955356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:40.188 [2024-12-05 13:04:39.955381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:40.188 [2024-12-05 13:04:39.955402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:40.188 [2024-12-05 13:04:39.955425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:40.188 [2024-12-05 13:04:39.955447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:40.188 [2024-12-05 13:04:39.955463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:40.188 [2024-12-05 13:04:39.955471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:40.188 [2024-12-05 13:04:39.955478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:40.188 [2024-12-05 13:04:39.955487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:40.188 [2024-12-05 13:04:39.955496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:40.188 [2024-12-05 13:04:39.955503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:40.188 [2024-12-05 13:04:39.955518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:40.188 [2024-12-05 13:04:39.955527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955534] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:40.188 [2024-12-05 13:04:39.955543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:40.188 [2024-12-05 13:04:39.955552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:40.188 [2024-12-05 13:04:39.955562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:40.188 [2024-12-05 13:04:39.955570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:40.188 [2024-12-05 13:04:39.955578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:40.188 [2024-12-05 13:04:39.955585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:40.188 
[2024-12-05 13:04:39.955592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:40.189 [2024-12-05 13:04:39.955598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:40.189 [2024-12-05 13:04:39.955605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:40.189 [2024-12-05 13:04:39.955614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:40.189 [2024-12-05 13:04:39.955628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:40.189 [2024-12-05 13:04:39.955637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:40.189 [2024-12-05 13:04:39.955644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:40.189 [2024-12-05 13:04:39.955651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:40.189 [2024-12-05 13:04:39.955658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:40.189 [2024-12-05 13:04:39.955665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:40.189 [2024-12-05 13:04:39.955672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:40.189 [2024-12-05 13:04:39.955679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:40.189 [2024-12-05 13:04:39.955686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:40.189 [2024-12-05 13:04:39.955693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:40.189 [2024-12-05 13:04:39.955700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:40.189 [2024-12-05 13:04:39.955707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:40.189 [2024-12-05 13:04:39.955714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:40.189 [2024-12-05 13:04:39.955721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:40.189 [2024-12-05 13:04:39.955728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:40.189 [2024-12-05 13:04:39.955736] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:40.189 [2024-12-05 13:04:39.955746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:40.189 [2024-12-05 13:04:39.955755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:40.189 [2024-12-05 13:04:39.955763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:40.189 [2024-12-05 13:04:39.955770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:40.189 [2024-12-05 13:04:39.955777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:40.189 [2024-12-05 13:04:39.955784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.955792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:40.189 [2024-12-05 13:04:39.955799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:33:40.189 [2024-12-05 13:04:39.955820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.963745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.963770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:40.189 [2024-12-05 13:04:39.963785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.873 ms 00:33:40.189 [2024-12-05 13:04:39.963793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.963884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.963893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:40.189 [2024-12-05 13:04:39.963901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:40.189 [2024-12-05 13:04:39.963913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.983290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.983329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:40.189 [2024-12-05 13:04:39.983341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.333 ms 00:33:40.189 [2024-12-05 13:04:39.983355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.983397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.983407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:40.189 [2024-12-05 13:04:39.983421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:40.189 [2024-12-05 13:04:39.983429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.983527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.983541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:40.189 [2024-12-05 13:04:39.983550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:40.189 [2024-12-05 13:04:39.983557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.983676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.983684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:40.189 [2024-12-05 13:04:39.983693] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:33:40.189 [2024-12-05 13:04:39.983704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.990166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.990198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:40.189 [2024-12-05 13:04:39.990220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms 00:33:40.189 [2024-12-05 13:04:39.990228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:39.990331] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:40.189 [2024-12-05 13:04:39.990348] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:40.189 [2024-12-05 13:04:39.990358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:39.990367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:40.189 [2024-12-05 13:04:39.990374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:40.189 [2024-12-05 13:04:39.990388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:40.002969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:40.002997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:40.189 [2024-12-05 13:04:40.003008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.566 ms 00:33:40.189 [2024-12-05 13:04:40.003017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:40.003134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:40.003142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:40.189 [2024-12-05 13:04:40.003154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:33:40.189 [2024-12-05 13:04:40.003163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:40.003206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:40.003218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:40.189 [2024-12-05 13:04:40.003226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:40.189 [2024-12-05 13:04:40.003234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:40.003547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:40.003557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:40.189 [2024-12-05 13:04:40.003565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:33:40.189 [2024-12-05 13:04:40.003572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.189 [2024-12-05 13:04:40.003586] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:40.189 [2024-12-05 13:04:40.003596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.189 [2024-12-05 13:04:40.003605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:40.189 [2024-12-05 13:04:40.003615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:40.189 [2024-12-05 13:04:40.003623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.012362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:40.190 [2024-12-05 13:04:40.012496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.190 [2024-12-05 13:04:40.012506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:40.190 [2024-12-05 13:04:40.012515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.856 ms 00:33:40.190 [2024-12-05 13:04:40.012527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.014937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.190 [2024-12-05 13:04:40.014959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:40.190 [2024-12-05 13:04:40.014970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:33:40.190 [2024-12-05 13:04:40.014978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.015033] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:40.190 [2024-12-05 13:04:40.015641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.190 [2024-12-05 13:04:40.015657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:40.190 [2024-12-05 13:04:40.015669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:33:40.190 [2024-12-05 13:04:40.015677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.015716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.190 [2024-12-05 13:04:40.015725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:40.190 [2024-12-05 13:04:40.015738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:40.190 [2024-12-05 13:04:40.015748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.015783] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:40.190 [2024-12-05 13:04:40.015797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.190 [2024-12-05 13:04:40.015805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:40.190 [2024-12-05 13:04:40.015828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:40.190 [2024-12-05 13:04:40.015838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.019720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.190 [2024-12-05 13:04:40.019752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:40.190 [2024-12-05 13:04:40.019763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.865 ms 00:33:40.190 [2024-12-05 13:04:40.019771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.019850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:40.190 [2024-12-05 13:04:40.019860] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:40.190 [2024-12-05 13:04:40.019872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:40.190 [2024-12-05 13:04:40.019880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:40.190 [2024-12-05 13:04:40.023724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 70.731 ms, result 0 00:33:41.577  [2024-12-05T13:04:42.362Z] Copying: 42/1024 [MB] (42 MBps) [2024-12-05T13:04:43.303Z] Copying: 88/1024 [MB] (45 MBps) [2024-12-05T13:04:44.243Z] Copying: 135/1024 [MB] (47 MBps) [2024-12-05T13:04:45.659Z] Copying: 184/1024 [MB] (48 MBps) [2024-12-05T13:04:46.244Z] Copying: 234/1024 [MB] (49 MBps) [2024-12-05T13:04:47.616Z] Copying: 283/1024 [MB] (49 MBps) [2024-12-05T13:04:48.547Z] Copying: 331/1024 [MB] (48 MBps) [2024-12-05T13:04:49.478Z] Copying: 378/1024 [MB] (46 MBps) [2024-12-05T13:04:50.408Z] Copying: 426/1024 [MB] (47 MBps) [2024-12-05T13:04:51.339Z] Copying: 476/1024 [MB] (49 MBps) [2024-12-05T13:04:52.271Z] Copying: 523/1024 [MB] (47 MBps) [2024-12-05T13:04:53.205Z] Copying: 568/1024 [MB] (45 MBps) [2024-12-05T13:04:54.587Z] Copying: 615/1024 [MB] (47 MBps) [2024-12-05T13:04:55.531Z] Copying: 649/1024 [MB] (33 MBps) [2024-12-05T13:04:56.469Z] Copying: 674/1024 [MB] (25 MBps) [2024-12-05T13:04:57.433Z] Copying: 699/1024 [MB] (24 MBps) [2024-12-05T13:04:58.371Z] Copying: 732/1024 [MB] (32 MBps) [2024-12-05T13:04:59.306Z] Copying: 771/1024 [MB] (39 MBps) [2024-12-05T13:05:00.239Z] Copying: 818/1024 [MB] (46 MBps) [2024-12-05T13:05:01.611Z] Copying: 866/1024 [MB] (48 MBps) [2024-12-05T13:05:02.543Z] Copying: 917/1024 [MB] (51 MBps) [2024-12-05T13:05:03.476Z] Copying: 966/1024 [MB] (49 MBps) [2024-12-05T13:05:03.476Z] Copying: 1019/1024 [MB] (52 MBps) [2024-12-05T13:05:03.736Z] Copying: 1024/1024 [MB] (average 44 MBps)[2024-12-05 13:05:03.609218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.884 [2024-12-05 13:05:03.609301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:03.884 [2024-12-05 13:05:03.609317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:03.884 [2024-12-05 13:05:03.609329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.884 [2024-12-05 13:05:03.609352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:03.884 [2024-12-05 13:05:03.609997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.884 [2024-12-05 13:05:03.610028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:03.884 [2024-12-05 13:05:03.610040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:34:03.884 [2024-12-05 13:05:03.610063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.884 [2024-12-05 13:05:03.610346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.884 [2024-12-05 13:05:03.610366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:03.884 [2024-12-05 13:05:03.610384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:34:03.884 [2024-12-05 13:05:03.610396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.884 [2024-12-05 13:05:03.610432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.884 [2024-12-05 
13:05:03.610445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:03.884 [2024-12-05 13:05:03.610457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:03.884 [2024-12-05 13:05:03.610469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.884 [2024-12-05 13:05:03.610537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.884 [2024-12-05 13:05:03.610548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:03.884 [2024-12-05 13:05:03.610559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:03.884 [2024-12-05 13:05:03.610570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.884 [2024-12-05 13:05:03.610587] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:03.884 [2024-12-05 13:05:03.610607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:03.884 [2024-12-05 13:05:03.610620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610789] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.610962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.611595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.611605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.611615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.611627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.611636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.611646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:03.884 [2024-12-05 13:05:03.611656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611695] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 
13:05:03.611960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.611999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:34:03.885 [2024-12-05 13:05:03.612201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:03.885 [2024-12-05 13:05:03.612279] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:03.885 [2024-12-05 13:05:03.612289] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9d3bffc2-01da-4dfa-8846-a146b1722d32 00:34:03.885 [2024-12-05 13:05:03.612299] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:03.885 [2024-12-05 13:05:03.612317] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:34:03.885 [2024-12-05 13:05:03.612326] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:34:03.885 [2024-12-05 13:05:03.612339] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:34:03.885 [2024-12-05 13:05:03.612348] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:03.885 [2024-12-05 13:05:03.612358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:03.885 [2024-12-05 13:05:03.612367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:03.885 [2024-12-05 13:05:03.612376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:03.885 [2024-12-05 13:05:03.612385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:03.885 [2024-12-05 13:05:03.612394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.885 [2024-12-05 13:05:03.612405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:03.885 [2024-12-05 13:05:03.612415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:34:03.885 [2024-12-05 13:05:03.612424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.885 [2024-12-05 13:05:03.614468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.885 [2024-12-05 13:05:03.614508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:03.885 [2024-12-05 13:05:03.614520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:34:03.885 [2024-12-05 13:05:03.614529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.885 [2024-12-05 13:05:03.614633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:03.885 [2024-12-05 13:05:03.614644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:03.885 [2024-12-05 13:05:03.614654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:34:03.885 [2024-12-05 
13:05:03.614664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.885 [2024-12-05 13:05:03.623404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.885 [2024-12-05 13:05:03.623445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:03.885 [2024-12-05 13:05:03.623455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.885 [2024-12-05 13:05:03.623463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.885 [2024-12-05 13:05:03.623531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.885 [2024-12-05 13:05:03.623540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:03.885 [2024-12-05 13:05:03.623548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.885 [2024-12-05 13:05:03.623556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.885 [2024-12-05 13:05:03.623616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.885 [2024-12-05 13:05:03.623636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:03.885 [2024-12-05 13:05:03.623645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.885 [2024-12-05 13:05:03.623653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.885 [2024-12-05 13:05:03.623668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.623676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:03.886 [2024-12-05 13:05:03.623684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.623691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.635173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.635219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:03.886 [2024-12-05 13:05:03.635230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.635238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.644835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.644877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:03.886 [2024-12-05 13:05:03.644888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.644896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.644943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.644953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:03.886 [2024-12-05 13:05:03.644967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.644975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.645001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.645014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:03.886 [2024-12-05 13:05:03.645022] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.645036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.645097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.645108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:03.886 [2024-12-05 13:05:03.645115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.645126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.645149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.645158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:03.886 [2024-12-05 13:05:03.645166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.645173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.645211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.645221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:03.886 [2024-12-05 13:05:03.645229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.645239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.645279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:03.886 [2024-12-05 13:05:03.645290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:03.886 [2024-12-05 13:05:03.645298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:03.886 [2024-12-05 13:05:03.645306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:03.886 [2024-12-05 13:05:03.645438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 36.192 ms, result 0 00:34:04.206 00:34:04.206 00:34:04.206 13:05:03 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:06.769 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92294 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92294 ']' 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92294 00:34:06.769 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92294) - No such process 00:34:06.769 Process with pid 92294 is not found 00:34:06.769 Remove shared memory files 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 92294 is not found' 00:34:06.769 
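The xtrace lines above show the restore test's success path: the restored file's checksum is verified with md5sum -c, the error trap is cleared, the scratch files are removed, and the target process is terminated. The "No such process" message is expected here, because the helper first probes liveness with kill -0 and pid 92294 has already exited. A minimal sketch of that flow, reconstructed from the trace rather than copied from the scripts ($TESTFILE, $FTL_JSON, and $SPDK_PID are assumed illustrative names):

    # Sketch only -- reconstructed from the xtrace output above, not the verbatim
    # ftl/restore.sh / autotest_common.sh helpers. Variable names are assumptions.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if kill -0 "$pid" 2>/dev/null; then    # signal 0 probes existence; nothing is delivered
            kill "$pid" && wait "$pid"         # wait reaps it if it is a child of this shell
        else
            echo "Process with pid $pid is not found"
        fi
    }

    restore_kill() {
        rm -f "$TESTFILE" "$TESTFILE.md5" "$FTL_JSON"   # scratch data, checksum file, bdev config
        killprocess "$SPDK_PID"
    }

    md5sum -c "$TESTFILE.md5"      # prints "<file>: OK" on success, as seen above
    trap - SIGINT SIGTERM EXIT     # success path: drop the cleanup-on-error trap
    restore_kill

kill -0 sends no signal at all; it only asks the kernel whether the pid exists and is signalable, which makes it a cheap liveness probe before the real kill.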
13:05:06 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_band_md /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_l2p_l1 /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_l2p_l2 /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_l2p_l2_ctx /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_nvc_md /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_p2l_pool /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_sb /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_sb_shm /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_trim_bitmap /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_trim_log /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_trim_md /dev/hugepages/ftl_9d3bffc2-01da-4dfa-8846-a146b1722d32_vmap 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:06.769 00:34:06.769 real 2m20.067s 00:34:06.769 user 2m8.929s 00:34:06.769 sys 0m12.225s 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.769 ************************************ 00:34:06.769 13:05:06 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:06.769 END TEST ftl_restore_fast 00:34:06.769 ************************************ 00:34:06.769 13:05:06 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:06.769 13:05:06 ftl -- ftl/ftl.sh@14 -- # killprocess 86524 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@954 -- # '[' -z 86524 ']' 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@958 -- # kill -0 86524 00:34:06.769 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86524) - No such process 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 86524 is not found' 00:34:06.769 Process with pid 86524 is not found 00:34:06.769 13:05:06 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:06.769 13:05:06 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=93716 00:34:06.769 13:05:06 ftl -- ftl/ftl.sh@20 -- # waitforlisten 93716 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@835 -- # '[' -z 93716 ']' 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.769 13:05:06 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.769 13:05:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:06.769 [2024-12-05 13:05:06.263525] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 23.11.0 initialization... 
00:34:06.769 [2024-12-05 13:05:06.263659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93716 ] 00:34:06.769 [2024-12-05 13:05:06.424515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.769 [2024-12-05 13:05:06.449293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.334 13:05:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:07.334 13:05:07 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:07.334 13:05:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:07.593 nvme0n1 00:34:07.593 13:05:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:07.593 13:05:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:07.593 13:05:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:07.850 13:05:07 ftl -- ftl/common.sh@28 -- # stores=f3c26d7e-6183-4dfd-8dfc-61c420183d41 00:34:07.850 13:05:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:07.850 13:05:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3c26d7e-6183-4dfd-8dfc-61c420183d41 00:34:08.109 13:05:07 ftl -- ftl/ftl.sh@23 -- # killprocess 93716 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 93716 ']' 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@958 -- # kill -0 93716 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@959 -- # uname 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93716 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:08.109 killing process with pid 93716 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93716' 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@973 -- # kill 93716 00:34:08.109 13:05:07 ftl -- common/autotest_common.sh@978 -- # wait 93716 00:34:08.367 13:05:08 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:08.625 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:08.625 Waiting for block devices as requested 00:34:08.625 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:08.625 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:08.887 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:08.887 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:14.214 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:14.214 Remove shared memory files 00:34:14.214 13:05:13 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:14.214 13:05:13 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:14.214 13:05:13 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:14.214 13:05:13 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:14.214 13:05:13 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:14.214 13:05:13 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:14.214 13:05:13 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:14.214 00:34:14.214 real 
10m35.845s 00:34:14.214 user 12m33.270s 00:34:14.214 sys 1m21.709s 00:34:14.214 13:05:13 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.214 ************************************ 00:34:14.214 END TEST ftl 00:34:14.214 ************************************ 00:34:14.214 13:05:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:14.214 13:05:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:14.214 13:05:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:14.214 13:05:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:14.214 13:05:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:14.214 13:05:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:14.214 13:05:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:14.214 13:05:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:14.214 13:05:13 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:14.214 13:05:13 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:14.214 13:05:13 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:14.214 13:05:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.214 13:05:13 -- common/autotest_common.sh@10 -- # set +x 00:34:14.214 13:05:13 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:14.214 13:05:13 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:14.214 13:05:13 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:14.214 13:05:13 -- common/autotest_common.sh@10 -- # set +x 00:34:15.142 INFO: APP EXITING 00:34:15.142 INFO: killing all VMs 00:34:15.142 INFO: killing vhost app 00:34:15.142 INFO: EXIT DONE 00:34:15.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:15.657 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:15.657 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:15.657 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:15.657 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:15.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:16.476 Cleaning 00:34:16.476 Removing: /var/run/dpdk/spdk0/config 00:34:16.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:16.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:16.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:16.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:16.476 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:16.476 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:16.476 Removing: /var/run/dpdk/spdk0 00:34:16.476 Removing: /var/run/dpdk/spdk_pid69418 00:34:16.476 Removing: /var/run/dpdk/spdk_pid69582 00:34:16.476 Removing: /var/run/dpdk/spdk_pid69783 00:34:16.476 Removing: /var/run/dpdk/spdk_pid69871 00:34:16.476 Removing: /var/run/dpdk/spdk_pid69894 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70005 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70023 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70206 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70274 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70359 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70459 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70534 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70573 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70610 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70675 00:34:16.476 Removing: /var/run/dpdk/spdk_pid70770 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71195 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71237 00:34:16.476 
Removing: /var/run/dpdk/spdk_pid71289 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71305 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71363 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71379 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71437 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71453 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71495 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71513 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71555 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71573 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71706 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71742 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71826 00:34:16.476 Removing: /var/run/dpdk/spdk_pid71987 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72054 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72085 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72508 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72601 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72701 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72743 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72775 00:34:16.476 Removing: /var/run/dpdk/spdk_pid72849 00:34:16.476 Removing: /var/run/dpdk/spdk_pid73469 00:34:16.476 Removing: /var/run/dpdk/spdk_pid73500 00:34:16.476 Removing: /var/run/dpdk/spdk_pid73960 00:34:16.476 Removing: /var/run/dpdk/spdk_pid74052 00:34:16.476 Removing: /var/run/dpdk/spdk_pid74150 00:34:16.476 Removing: /var/run/dpdk/spdk_pid74191 00:34:16.476 Removing: /var/run/dpdk/spdk_pid74212 00:34:16.476 Removing: /var/run/dpdk/spdk_pid74232 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76057 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76172 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76187 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76199 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76238 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76242 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76254 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76300 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76304 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76316 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76361 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76365 00:34:16.476 Removing: /var/run/dpdk/spdk_pid76379 00:34:16.476 Removing: /var/run/dpdk/spdk_pid77772 00:34:16.476 Removing: /var/run/dpdk/spdk_pid77858 00:34:16.476 Removing: /var/run/dpdk/spdk_pid79248 00:34:16.476 Removing: /var/run/dpdk/spdk_pid80992 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81050 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81117 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81223 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81304 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81394 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81446 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81516 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81614 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81695 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81785 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81837 00:34:16.476 Removing: /var/run/dpdk/spdk_pid81907 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82000 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82086 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82171 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82234 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82298 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82391 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82472 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82562 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82620 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82684 00:34:16.476 Removing: 
/var/run/dpdk/spdk_pid82748 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82820 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82919 00:34:16.476 Removing: /var/run/dpdk/spdk_pid82999 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83082 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83140 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83203 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83272 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83335 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83434 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83519 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83657 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83919 00:34:16.476 Removing: /var/run/dpdk/spdk_pid83949 00:34:16.476 Removing: /var/run/dpdk/spdk_pid84393 00:34:16.476 Removing: /var/run/dpdk/spdk_pid84568 00:34:16.476 Removing: /var/run/dpdk/spdk_pid84657 00:34:16.476 Removing: /var/run/dpdk/spdk_pid84761 00:34:16.476 Removing: /var/run/dpdk/spdk_pid84798 00:34:16.476 Removing: /var/run/dpdk/spdk_pid84823 00:34:16.476 Removing: /var/run/dpdk/spdk_pid85135 00:34:16.476 Removing: /var/run/dpdk/spdk_pid85173 00:34:16.476 Removing: /var/run/dpdk/spdk_pid85223 00:34:16.476 Removing: /var/run/dpdk/spdk_pid85587 00:34:16.476 Removing: /var/run/dpdk/spdk_pid85733 00:34:16.476 Removing: /var/run/dpdk/spdk_pid86524 00:34:16.476 Removing: /var/run/dpdk/spdk_pid86645 00:34:16.734 Removing: /var/run/dpdk/spdk_pid86810 00:34:16.734 Removing: /var/run/dpdk/spdk_pid86888 00:34:16.734 Removing: /var/run/dpdk/spdk_pid87170 00:34:16.734 Removing: /var/run/dpdk/spdk_pid87394 00:34:16.734 Removing: /var/run/dpdk/spdk_pid87723 00:34:16.734 Removing: /var/run/dpdk/spdk_pid87879 00:34:16.734 Removing: /var/run/dpdk/spdk_pid87959 00:34:16.734 Removing: /var/run/dpdk/spdk_pid88001 00:34:16.734 Removing: /var/run/dpdk/spdk_pid88094 00:34:16.734 Removing: /var/run/dpdk/spdk_pid88114 00:34:16.734 Removing: /var/run/dpdk/spdk_pid88155 00:34:16.734 Removing: /var/run/dpdk/spdk_pid88305 00:34:16.734 Removing: /var/run/dpdk/spdk_pid88512 00:34:16.734 Removing: /var/run/dpdk/spdk_pid88786 00:34:16.734 Removing: /var/run/dpdk/spdk_pid89115 00:34:16.734 Removing: /var/run/dpdk/spdk_pid89380 00:34:16.734 Removing: /var/run/dpdk/spdk_pid89720 00:34:16.734 Removing: /var/run/dpdk/spdk_pid89852 00:34:16.734 Removing: /var/run/dpdk/spdk_pid89934 00:34:16.734 Removing: /var/run/dpdk/spdk_pid90338 00:34:16.734 Removing: /var/run/dpdk/spdk_pid90402 00:34:16.734 Removing: /var/run/dpdk/spdk_pid90731 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91046 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91386 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91508 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91537 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91591 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91642 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91700 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91876 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91926 00:34:16.734 Removing: /var/run/dpdk/spdk_pid91983 00:34:16.734 Removing: /var/run/dpdk/spdk_pid92083 00:34:16.734 Removing: /var/run/dpdk/spdk_pid92105 00:34:16.734 Removing: /var/run/dpdk/spdk_pid92165 00:34:16.734 Removing: /var/run/dpdk/spdk_pid92294 00:34:16.734 Removing: /var/run/dpdk/spdk_pid92497 00:34:16.734 Removing: /var/run/dpdk/spdk_pid92899 00:34:16.734 Removing: /var/run/dpdk/spdk_pid93159 00:34:16.734 Removing: /var/run/dpdk/spdk_pid93423 00:34:16.734 Removing: /var/run/dpdk/spdk_pid93716 00:34:16.734 Clean 00:34:16.734 13:05:16 -- common/autotest_common.sh@1453 -- # return 0 00:34:16.734 
13:05:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:16.734 13:05:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.734 13:05:16 -- common/autotest_common.sh@10 -- # set +x 00:34:16.734 13:05:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:16.734 13:05:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.734 13:05:16 -- common/autotest_common.sh@10 -- # set +x 00:34:16.734 13:05:16 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:16.734 13:05:16 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:16.734 13:05:16 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:16.735 13:05:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:16.735 13:05:16 -- spdk/autotest.sh@398 -- # hostname 00:34:16.735 13:05:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:16.992 geninfo: WARNING: invalid characters removed from testname! 00:34:43.522 13:05:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:43.522 13:05:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:46.048 13:05:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:48.000 13:05:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:49.901 13:05:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:51.800 13:05:51 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:54.331 13:05:53 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:54.331 13:05:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:54.331 13:05:53 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:34:54.331 13:05:53 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:54.331 13:05:53 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:54.331 13:05:53 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:54.331 + [[ -n 5764 ]] 00:34:54.331 + sudo kill 5764 00:34:54.339 [Pipeline] } 00:34:54.353 [Pipeline] // timeout 00:34:54.358 [Pipeline] } 00:34:54.372 [Pipeline] // stage 00:34:54.376 [Pipeline] } 00:34:54.389 [Pipeline] // catchError 00:34:54.396 [Pipeline] stage 00:34:54.398 [Pipeline] { (Stop VM) 00:34:54.410 [Pipeline] sh 00:34:54.685 + vagrant halt 00:34:57.210 ==> default: Halting domain... 00:35:00.525 [Pipeline] sh 00:35:00.800 + vagrant destroy -f 00:35:03.329 ==> default: Removing domain... 00:35:04.275 [Pipeline] sh 00:35:04.552 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:04.561 [Pipeline] } 00:35:04.577 [Pipeline] // stage 00:35:04.583 [Pipeline] } 00:35:04.597 [Pipeline] // dir 00:35:04.603 [Pipeline] } 00:35:04.619 [Pipeline] // wrap 00:35:04.626 [Pipeline] } 00:35:04.638 [Pipeline] // catchError 00:35:04.648 [Pipeline] stage 00:35:04.651 [Pipeline] { (Epilogue) 00:35:04.664 [Pipeline] sh 00:35:04.943 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:10.209 [Pipeline] catchError 00:35:10.212 [Pipeline] { 00:35:10.226 [Pipeline] sh 00:35:10.505 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:10.505 Artifacts sizes are good 00:35:10.513 [Pipeline] } 00:35:10.528 [Pipeline] // catchError 00:35:10.541 [Pipeline] archiveArtifacts 00:35:10.549 Archiving artifacts 00:35:10.702 [Pipeline] cleanWs 00:35:10.712 [WS-CLEANUP] Deleting project workspace... 00:35:10.712 [WS-CLEANUP] Deferred wipeout is used... 00:35:10.718 [WS-CLEANUP] done 00:35:10.721 [Pipeline] } 00:35:10.736 [Pipeline] // stage 00:35:10.742 [Pipeline] } 00:35:10.756 [Pipeline] // node 00:35:10.762 [Pipeline] End of Pipeline 00:35:10.801 Finished: SUCCESS
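The coverage post-processing traced near the end of the run condenses to the following lcov pipeline. This is a sketch, not the verbatim invocations: paths are shortened, and the repeated --rc lcov_branch_coverage=1 / lcov_function_coverage=1 / genhtml_* flags are elided. cov_base.info is the pre-test baseline captured earlier in the run.

    # Capture counters from the test run, tagged with the builder hostname.
    lcov -q -c --no-external -d "$SPDK_REPO" -t "$HOSTNAME" -o cov_test.info
    # Merge the pre-test baseline with the test-run counters.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Filter the merged totals down to SPDK's own sources.
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                                  # vendored DPDK
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info      # system headers
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info

Filtering with -r after the -a merge keeps the final cov_total.info limited to SPDK's own code; the vendored DPDK tree, system headers under /usr, and the small example and app binaries are excluded before the report is produced.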